Finally, a 7B-Parameter Model Beats GPT-4!
We are entering the era of small & highly efficient models!
TimeGPT: The First Foundation Model for Time Series Forecasting
Writing Killer Prompts: Mastering Prompt Engineering for Stunning AI Results
Prompt engineering, or prompt design, is the craft of writing instructions that get the desired responses from LLMs. It’s essential for ensuring accurate, high-quality output from a large language model.
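To make that concrete, here is a minimal sketch (my own illustration, not the article’s) of one common prompt-engineering pattern: a template with explicit role, context, task, and output-format sections. The section names and the example data are assumptions; the resulting string can be sent to any chat/completions endpoint.

```python
# A minimal prompt-template sketch; the section names are my own convention,
# not a standard API.
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a prompt with explicit role, context, task, and format sections."""
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Respond strictly in this format:\n{output_format}"
    )

# Hypothetical example data.
prompt = build_prompt(
    role="a senior data analyst",
    context="Monthly churn for 2023: Jan 2.1%, Feb 2.4%, Mar 1.9%.",
    task="Summarize the churn trend and flag any anomaly.",
    output_format="Two bullet points, each under 20 words.",
)
print(prompt)  # send this string to any chat/completions endpoint
```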
Advanced RAG 07: Exploring RAG for Tables
Implementing RAG presents a challenge, especially when it comes to effectively parsing and understanding tables in unstructured documents. This is particularly difficult with scanned documents or documents in image format. These challenges have at least three aspects.
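As one illustration of a common workaround (mine, not necessarily the article’s), a parsed table can be serialized into a self-describing Markdown chunk with a caption, so that it survives chunking and can be embedded and retrieved like ordinary text. The helper name and example data below are hypothetical.

```python
# Hypothetical helper: serialize a parsed table into a self-describing
# Markdown chunk so it can be embedded and retrieved like normal text.
def table_to_chunk(caption: str, header: list[str], rows: list[list[str]]) -> str:
    lines = [
        f"Table: {caption}",
        "| " + " | ".join(header) + " |",
        "| " + " | ".join("---" for _ in header) + " |",
    ]
    lines += ["| " + " | ".join(str(cell) for cell in row) + " |" for row in rows]
    return "\n".join(lines)

# Illustrative data only.
chunk = table_to_chunk(
    "Quarterly revenue (USD millions)",
    ["Quarter", "Revenue"],
    [["Q1", "12.4"], ["Q2", "15.1"]],
)
print(chunk)  # ready to embed alongside the surrounding prose
```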
Building a Chat Application for Complex SQL Database Interaction Using LangChain, LLMs, and Streamlit
In this article we will see how we can use large language models (LLMs) to interact with a complex database using LangChain agents and tools, and then deploy the chat application using Streamlit.
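A minimal sketch of that wiring, assuming recent langchain-community, langchain-openai, and Streamlit releases (module paths and the create_sql_agent arguments vary between LangChain versions, and the database URI is a hypothetical local SQLite file):

```python
# Sketch only: a LangChain SQL agent behind a Streamlit chat UI.
# Assumes langchain-community, langchain-openai, streamlit, and an OPENAI_API_KEY.
import streamlit as st
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///example.db")     # hypothetical local database
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)

st.title("Chat with your SQL database")
question = st.chat_input("Ask a question about the data")
if question:
    with st.chat_message("user"):
        st.write(question)
    result = agent.invoke({"input": question})        # the agent plans and runs SQL via tools
    with st.chat_message("assistant"):
        st.write(result["output"])
```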
🐍 Python Developers Are Shocked! Multithreading Moves Forward!
Extracting Structured Data from Unstructured Text Using LLMs
This is Part 1 of my “Understanding Unstructured Data” series. Part 2 focuses on analyzing structured data extracted from unstructured text with a LangChain agent.
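As a taste of the technique, here is a minimal sketch (not the series’ code) of extracting structured fields by asking the model for JSON and validating the keys. It assumes the openai>=1.0 Python client, an OPENAI_API_KEY in the environment, and an invented example sentence.

```python
# Sketch only: ask the model for JSON and validate the keys.
import json
from openai import OpenAI

client = OpenAI()
text = "Acme Corp raised $12M in a Series A on March 3, 2023, led by Example Ventures."

response = client.chat.completions.create(
    model="gpt-4o-mini",                              # any JSON-mode-capable chat model
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract JSON with keys: company, amount, round, date, lead_investor."},
        {"role": "user", "content": text},
    ],
)

record = json.loads(response.choices[0].message.content)
assert {"company", "amount", "round", "date", "lead_investor"} <= record.keys()
print(record)
```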
Apache Kafka + Vector Database + LLM = Real-Time GenAI
Generative AI (GenAI) enables advanced AI use cases and innovation, but it also changes what enterprise architecture looks like. Large Language Models (LLMs), vector databases, and Retrieval-Augmented Generation (RAG) require new data integration patterns and data engineering best practices.
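One way to picture the streaming-ingest side of such a pipeline is sketched below (my own illustration, not the article’s reference architecture): consume documents from a Kafka topic, embed them, and upsert them into a vector index. The topic name, broker address, and the in-memory “index” are stand-ins for a real deployment with a managed vector database.

```python
# Sketch only: streaming ingest for a real-time RAG pipeline.
import json
from kafka import KafkaConsumer                         # pip install kafka-python
from sentence_transformers import SentenceTransformer   # pip install sentence-transformers

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vector_index = []  # stand-in for Pinecone / Milvus / pgvector / etc.

consumer = KafkaConsumer(
    "documents",                                        # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    doc = message.value                                 # e.g. {"id": ..., "text": ...}
    embedding = embedder.encode(doc["text"])
    vector_index.append({"id": doc["id"], "vector": embedding, "text": doc["text"]})
    # A query-time LLM would now retrieve fresh context from this index (RAG).
```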
Building Your First Desktop Application with PySide6 [Data Scientist Edition]
How to Improve RAG Results in Your LLM Applications: From Basics to Advanced
If you’re building any meaningful product or feature with LLMs (large language models), you’ll probably use the technique called RAG (retrieval-augmented generation). It allows you to integrate external data that was not available in the LLM’s training data into the text generation process, which can greatly reduce hallucination and improve the relevance of responses.
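For readers new to the idea, here is a minimal sketch of the basic RAG loop (not the article’s pipeline): embed a small corpus, retrieve the chunk most similar to the question, and stuff it into the prompt. The model name and corpus are illustrative.

```python
# Sketch only: embed a tiny corpus, retrieve the most similar chunk, build the prompt.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [  # illustrative documents
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am-5pm CET.",
    "Premium subscribers get priority email support.",
]
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

question = "How long do customers have to return a product?"
query_embedding = embedder.encode(question, convert_to_tensor=True)
top_hit = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0][0]

prompt = (
    "Answer using only the context below.\n\n"
    f"Context: {corpus[top_hit['corpus_id']]}\n\n"
    f"Question: {question}"
)
print(prompt)  # grounding the LLM in retrieved context curbs hallucination
```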