
TL;DR: While existing LLM app tools like LangChain and LlamaIndex are useful for building LLM apps, their data loading capabilities aren’t recommended outside of initial experimentation. As I built and tested my own LLM app pipeline I felt the pain of the parts that are underdeveloped and hacked together. If you’re planning to build a production-ready data pipeline to fuel your LLM apps, you should strongly consider using an EL tool purpose-built for the job.

Introduction

In the year since OpenAI released ChatGPT there’s been a ton of excitement and a seemingly endless stream of new content and apps built on top of their technology. As the dust has started to settle from the initial hype, the developer ecosystem has begun to take a step back and assess where we’re at so far. Two recent posts shared in the Meltano community piqued our interest and motivated us to do a deeper dive.

The first is a16z’s “Emerging Architectures for LLM Applications” article, which surveys emerging patterns for building Large Language Model (LLM) applications and calls out the data pre-processing/embedding stage: “this piece of the stack is relatively underdeveloped, though, and there’s an opportunity for data-replication solutions purpose-built for LLM apps”.

The second was the spicy Reddit thread titled “Langchain Is Pointless“, which later hit Hacker News as well.

“Why is this just not ETL, why do you need anything here? There is no new category or product needed here.” and “It is pointless – LlamaIndex and LangChain are re-inventing ETL – why use them when you have robust technology already?”

LangChain is grouped into the Orchestration stage by a16z and is one of the main tools used by LLM app developers for everything from extracting data from sources, to embedding documents and storing them in vector databases, to prompt chaining and storing memory for chat-based apps. Don’t worry if you don’t understand some of these terms; I’ll explain them in the next section.

Inspired by those posts, I went on a deep dive to get a better understanding of what’s going on. I read through tons of articles, watched and listened to lots of YouTube videos and podcasts (on 2x speed), and built my own LLM chat app for answering questions based on the Meltano SDK docs.

In this blog post I’ll go through the following:

  1. A summary of what I found when I dug into the LLM app ecosystem and the challenges that come with it
  2. My own read-out, from a data engineering perspective, of how we’ve solved similar problems in the past, plus a proposed ideal architecture
  3. The POC I built using Meltano, and a vision for how Meltano could help solve these challenges in the future

Part 1 – A summary of the LLM app ecosystem

I’ll do my best to summarize what I’ve found in a concise way so you have enough context about the LLM app ecosystem to see through the jargon. I’ve found that, like a lot of other ecosystems, there are a lot of abbreviations and jargon, but once you peel back the layers it starts to look like common patterns you’ve seen before.

In-context Learning vs Fine Tuning

The first concept I came across was fine tuning vs in-context learning + Retrieval Augmented Generation (RAG). The problem both of these techniques try to solve is that pre-trained LLMs (like OpenAI’s GPT models or the open source alternatives on Hugging Face) were trained on static datasets that are now outdated. This is why, when you ask ChatGPT questions about current events, it gives you a canned response like “I’m sorry, I was trained on data from 2021…”.

The high-level difference between the two is that in-context learning sends context to the pre-trained model as part of the prompt at runtime to help it understand more about your question, whereas fine tuning further trains the pre-trained model on a new set of contextual data, after which all future prompts go directly to your new iteration of the model.
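
To make that distinction concrete, here’s a minimal sketch of the in-context half, assuming the pre-1.0 OpenAI Python client as it existed in 2023; how `context_snippets` gets retrieved is covered in the next section:

import openai  # assumes the pre-1.0 OpenAI Python client circa 2023

def answer_with_context(question: str, context_snippets: list[str]) -> str:
    # "In-context learning": stuff retrieved snippets into the prompt at
    # runtime so the pre-trained model can use them without any retraining.
    context = "\n\n".join(context_snippets)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]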

My takeaway was that for the majority of use cases the in-context learning approach has been the most popular. For more details on in-context learning I’d refer you to the a16z summary but my takeaways are:

  • In-context learning…
    • performs reasonably well for most LLM use cases as part of a RAG pipeline and is the preferred approach
    • leverages “off the shelf” tools like OpenAI’s API and Vector databases like Pinecone so a small data team can build an LLM app without having to hire specialized ML engineers
  • Fine tuning…
    • performs better in narrowly focused contexts when the dataset is large and high quality
    • requires more know-how around getting your data properly weighted, i.e. not over- or under-indexing on your content
    • requires you to host your own models and infrastructure for serving it

For these reasons I chose to focus primarily on in-context learning apps for the rest of my exploration.

Vector Databases and Embeddings

The next subject that came up was vector databases and embeddings.

A vector database is where your contextual data is stored so that it can be quickly searched at runtime to find semantically similar data for your in-context prompt. This is your knowledge base. Vector databases are not a new technology that came during the LLM wave but they’ve definitely gained a lot of popularity for this use case. Some common ones are Pinecone, Weaviate, Chroma, the pgvector Postgres extension, and many others.

I’m not going to go into a lot of detail on how they work, mostly because I don’t know enough 😀 but also because I found the inner workings of a vector database too low-level and out of scope for this practical understanding of how to use them. Here’s my watered-down explanation of how you use vector databases in the context of LLM apps…

The concept is that you convert your source text data (i.e. our SDK docs HTML text) into embeddings, which are large arrays of numbers, using an embedding model like OpenAI’s embedding API, and load them into a vector database with the source text that generated each embedding attached. For the purposes of building LLM apps you don’t necessarily need to understand the details of what embeddings are or how they’re generated. Because these text blobs have a size limit, there’s an intermediate step to chunk them into smaller subsets. Then, when you want to search for similar texts in your chat app, you simply embed your input text (again with the OpenAI API) and pass the resulting array to the database as the query. The database takes care of the magic of retrieving similar vectors and returning your results, each with its original text blob attached. At this point you can collect all the relevant text snippets and use them as context for your new, enriched chat prompt.
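
Here’s a rough sketch of that whole round trip; the index name, sample chunk, and question are made-up illustrations, and it assumes the OpenAI and Pinecone Python clients as they existed in 2023:

import openai
import pinecone

# Illustrative config; assumes the 2023-era OpenAI and Pinecone clients.
pinecone.init(api_key="...", environment="asia-southeast1-gcp-free")
index = pinecone.Index("sdk-docs")  # hypothetical index name

def embed(text: str) -> list[float]:
    # One embedding API call per chunk; ada-002 returns a 1536-dimension vector.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

# Load: embed each pre-chunked text blob and store the original text as metadata.
chunks = ["JSON Schema Helpers# Classes and functions to streamline ..."]
index.upsert(
    vectors=[(f"chunk-{i}", embed(c), {"text": c}) for i, c in enumerate(chunks)]
)

# Query: embed the question and let the database find semantically similar vectors.
res = index.query(vector=embed("How do I test a tap?"), top_k=3, include_metadata=True)
context_snippets = [m["metadata"]["text"] for m in res["matches"]]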

LangChain

This is the most talked-about tool when it comes to building LLM apps. It abstracts away some of these complicated workflows and gives the developer simpler interfaces to build on. There’s debate about whether it does a good job, but nonetheless that’s the goal. From my view LangChain seems like a nice abstraction for the app layer, where things like prompt chaining, memory, and retrieval of context data from vector databases are needed. But LangChain’s scope overreaches with data connection capabilities that feel like reinventing the wheel in a less production-grade way.

LlamaIndex

Similarly, there are tons of mentions of LlamaIndex, which has a lot of overlap with LangChain in terms of features and in fact uses LangChain under the hood quite a bit. LlamaIndex has some other nice app-level features for building chat apps, like context searching, caching, etc. It also has a library of data loaders and tools advertised on a sibling project called LlamaHub. Again, the main thing I struggle with is that its scope creeps into that of a data movement tool; the docs say the following in the “Why LlamaIndex?” section:

“Applications built on top of LLMs often require augmenting these models with private or domain-specific data. Unfortunately, that data can be distributed across siloed applications and data stores. It’s behind APIs, in SQL databases, or trapped in PDFs and slide decks.”

So it seems they’re marketing it somewhat as a data movement tool 🤔 but the problem is that they’re reinventing the wheel in a less production-grade way. See the Slack reader, which is just a while loop, or this Medium author’s critique that:

“the confluence data loader from Llama is simply a wrapper around the html2text python library and dumps the entire confluence page into a string variable”

Part 2 – What we can learn from Data Engineering

It feels like these projects and the landscape were changing so quickly that nobody had time to stop and consider the scope of the libraries and where the boundaries should be. From my perspective these tools leak into the data engineering space and try to solve problems that have unique challenges and are better left to purpose-built data engineering tools.

The data ecosystem has been solving many similar problems over the years so let’s compare the two workflows and find the overlap. Maybe we can take some lessons from DE.

When I think about a summary of what most of these LLM apps are doing, I’d bucket them like this:

  • Data extraction – e.g. pull message text from the slack API
  • Data cleansing – e.g. remove certain characters, extra spaces, encoding, etc.
  • Data enrichment – embedding
  • Data loading – write to vector databases
  • Application UX – i.e. prompt chaining, retrieval, inference, memory, chat UI, etc.

And those buckets look a lot like what a traditional data team has been doing for years:

  • Data extraction – e.g. pull data from a variety of sources
  • Data enrichment and transformation – e.g. remove duplicates, add consistent names, aggregate complex data into consumable business metrics, etc.
  • Data loading – write to a data warehouse
  • Data visualization and consumption – charts and dashboards that tell a story about the data

For the purposes of my argument we can drop the application UX bucket, because in my opinion that’s the core competency of tools like LangChain and they do it well; use LangChain for that. On the data side we can also drop the data visualization phase, because for many data teams the visualization step is handed off to analysts and BI engineers, who have the hard job of working with data consumers to interpret and present the data nicely.

That leaves us with the rest of the steps which in both cases can be narrowed down to the following:

  • Extraction
  • Transform – i.e. Cleaning and Enriching
  • Loading

In the data world we’ve iterated on this process over the years, originally calling it Extract Transform Load (ETL) and more recently transitioning to Extract Load Transform (ELT). The lesson we learned was that extraction is slow and expensive, so we only want to do it once, whereas the transformation and enrichment step is less expensive and requires more iteration. In addition, the cost of storage dropped significantly over time, and separating storage from compute became a major design consideration. With these realizations we re-designed our systems to decouple the two workflows, and now most data teams Extract + Load raw data once, then transform it many times.

This directly translates to the LLM app world, because many teams are experimenting and iterating on the best ways to build their knowledge bases and apps. They’re cleaning their raw data differently, embedding with different models, or using different vector stores.

What are the most expensive parts of LLM data movement?

If we take this ETL vs ELT lesson and apply it to the LLM app development workflow, we see that it translates almost directly. When the data community was iterating on ETL vs ELT, we evaluated the most expensive steps and designed patterns and tools to reduce their impact. From my view the most expensive parts of the LLM data processing steps are:

  • Extracting
  • Enriching

As in traditional data engineering, extracting from a database or API is slow, expensive, and sometimes painful. On top of that, the enrichment step is also expensive, especially if you’re using an API like OpenAI’s embedding API; enrichment almost starts to look like a second extraction step. Of course there are ways to reduce the impact of these expensive steps, but it’s important to pinpoint the processes we’re trying to optimize.
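
A quick back-of-the-envelope sketch shows why; the corpus size and per-token price here are assumptions for illustration (check current OpenAI pricing before trusting the numbers):

# Back-of-the-envelope: what a full re-embed of a large corpus costs,
# assuming (hypothetically) $0.0001 per 1K tokens for ada-002 embeddings.
docs, tokens_per_doc = 1_000_000, 1_000
total_tokens = docs * tokens_per_doc    # 1 billion tokens
cost = total_tokens / 1_000 * 0.0001    # ~$100 per full run
print(f"~${cost:,.0f} per re-embed, plus hours of rate-limited API calls")

# Re-extraction has the same shape: every full pull repeats slow, often
# rate-limited API calls, which is why we only want to do each step once.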

Relatable Data Engineering Challenges

At a surface level, creating a script that pulls data from an API, cleans and enriches it, then writes it out to some destination seems simple. I’d actually agree. The hard part is all the extra features needed to get it to run reliably and efficiently in production (see the sketch after this list). Some things data engineers are frequently thinking about and dealing with are:

  • Rate limited APIs and outages
  • Pagination
  • Metadata and logging
  • Schema validation and data quality
  • Personally Identifiable Information (PII) handling, obfuscation, removal, etc.
  • Keeping incremental state between runs so you can pick up where you left off
  • Schema change management
  • Backfilling data
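
To make the contrast concrete, here’s a hedged sketch of just three of those items (pagination, rate limits, incremental state) against a hypothetical API; real pipelines also need the logging, schema validation, and backfill handling listed above:

import json
import time
from pathlib import Path

import requests

STATE_FILE = Path("state.json")  # remembers where the last run left off

def extract(api_url: str) -> list[dict]:
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    bookmark = state.get("bookmark", "1970-01-01T00:00:00Z")
    records, page = [], 1
    while True:
        # "updated_since"/"page" params are hypothetical for this sketch.
        resp = requests.get(api_url, params={"updated_since": bookmark, "page": page})
        if resp.status_code == 429:
            # Rate limited: honor Retry-After, then retry the same page.
            time.sleep(int(resp.headers.get("Retry-After", "30")))
            continue
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means we've paginated to the end
            break
        records.extend(batch)
        page += 1
    if records:
        # Persist incremental state so the next run picks up where we left off.
        state["bookmark"] = max(r["updated_at"] for r in records)
        STATE_FILE.write_text(json.dumps(state))
    return records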

These LLM libraries currently have capabilities to pull data from an API, but are they handling all the challenges of putting those pipelines in production? The answer is no. To be fair, I don’t think they set out to do this; they built features that users needed at the time. It’s just the nature of fast-moving open source projects (see LlamaIndex’s Slack reader again).

How would we design a system around these challenges?

Given all this information, I sketched up an example architecture that feels fitting for a workflow like this. The diagram reads from left to right: extract data from your sources, clean and embed your text, then load it into the vector database, with the LLM app itself out of scope for now.

The main premise is that we persist progress at each step. At first that might seem like extra overhead, but in a world where we want to iterate on the cleaning and embedding steps, we’ll be very happy not to re-extract or re-clean all the source data every time. Additionally, given the right tools, it should be easy to incrementally update each step in the workflow for only the data that’s new since the last run, i.e. only retrieve API data from yesterday. With incremental workflows you won’t have to worry as much about API rate limits or processing performance, because your dataset is much smaller.

Additionally, checkpointing progress at each step allows you to more easily substitute components over time. If you use Pinecone today, you could replace it with Weaviate tomorrow without having to re-extract all your data and generate new embeddings all over again.
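
As a toy sketch of that checkpointing idea (the file names, record shape, and stub embedding are all illustrative, not part of any real tool):

import json
from pathlib import Path

def run_step(step, in_path: Path, out_path: Path) -> None:
    # Each stage is checkpointed to disk; delete an output file to force
    # that stage (and only that stage) to re-run.
    if out_path.exists():
        return
    records = [json.loads(line) for line in in_path.read_text().splitlines()]
    out_path.write_text("\n".join(json.dumps(step(r)) for r in records))

def clean_record(record: dict) -> dict:
    record["page_content"] = " ".join(record["page_content"].split())
    return record

def embed_record(record: dict) -> dict:
    record["embedding"] = [0.0] * 1536  # stand-in for a real embedding API call
    return record

# raw.jsonl (extracted once) -> clean.jsonl -> embedded.jsonl
run_step(clean_record, Path("raw.jsonl"), Path("clean.jsonl"))
run_step(embed_record, Path("clean.jsonl"), Path("embedded.jsonl"))
# Swapping Pinecone for Weaviate now only means re-running the final load
# against embedded.jsonl; nothing upstream is touched.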

Part 3 – Meltano and LLM Apps

If I’ve convinced you that using data engineering patterns and tools for these workloads is a good idea, we can now talk about why I think Meltano is a good fit. I’ll explain the proof of concept I built using Meltano and the features that enabled it, then talk about how we can iterate on it to make it better in the future. You can also explore the fully functioning demo project on GitHub.

I built a simple pipeline to scrape the Meltano SDK docs site, clean the HTML, embed it using OpenAI’s embedding API, then load it into Pinecone. This is an iteration on the original. To validate my output I also found a sample Streamlit LLM chat app that retrieves data from my Pinecone index for context, and adapted it for my needs. All of this is represented as code in the demo project and is also included in the Meltano Squared project, which is deployed in production on Meltano Cloud.

Due to the time constraints of my POC I only implemented a simple end-to-end sync without the checkpointing features, although the fact that each component is a distinct plugin will let me easily extend the project to include robust checkpointing in the future. Other competing tools chose to use LangChain under the hood to chunk, embed, and load vector database data all in one tightly coupled step, which I think is a design flaw. Tightly coupling all of these steps doesn’t allow you to iterate toward a more robust design over time.

Each component is a plugin that can be installed and run in a Meltano project, so you could recreate this POC without writing any code; you simply add the plugins and configure them for your use case. MeltanoHub also has 600+ plugins to help you pull data from any other source you’d like. Let’s walk through each step.

Extract:

The extract step uses tap-beautifulsoup configured to scrape the SDK docs. It downloads all relevant HTML pages locally, then processes them into individual records with the BeautifulSoup library.

meltano.yml

- name: tap-beautifulsoup
  variant: meltanolabs
  pip_url: git+https://github.com/MeltanoLabs/tap-beautifulsoup.git@v0.1.0
  config:
    source_name: sdk-docs
    site_url: https://sdk.meltano.com/en/latest/
    output_folder: output
    parser: html.parser
    download_recursively: true
    find_all_kwargs:
      attrs:
        role: main

To preview the output you can run `meltano invoke tap-beautifulsoup` and see output that looks like:

log

2023-08-17T15:28:57.408509Z [info ] Environment 'dev' is active
2023-08-17 11:28:58,886 | INFO | tap-beautifulsoup | Beginning full_table sync of 'page_content'...
{"type": "SCHEMA", "stream": "page_content", "schema": {"properties": {"source": {"type": ["string", "null"]}, "page_content": {"description": "The page content.", "type": ["string", "null"]}, "metadata": {"properties": {"source": {"type": ["string", "null"]}}, "type": ["object", "null"]}}, "type": "object"}, "key_properties": []}
{"type": "RECORD", "stream": "page_content", "record": {"source": "output/sdk.meltano.com/en/latest/typing.html", "page_content": "JSON Schema Helpers#\nClasses and functions to streamline…..[Trimmed Content]", "metadata": {"source": "output/sdk.meltano.com/en/latest/typing.html"}}, "time_extracted": "2023-08-17T15:29:38.975515+00:00"}

Clean:

For this step I chose to write a small script with some custom parsing logic to remove extra spaces and newline characters. This script gets executed for each record that passes through the pipeline by a Meltano mapper called generic-mapper.

clean_text.py

# Excerpt from the mapper class; `t` is the `typing` module and `Message`
# is the Singer message type used by generic-mapper.
def map_record_message(self, message_dict: dict) -> t.Iterable[Message]:
    page_content = message_dict["record"]["page_content"]
    # Replace newlines with spaces, then collapse repeated whitespace.
    text_nl = " ".join(page_content.split("\n"))
    text_spaces = " ".join(text_nl.split())
    message_dict["record"]["page_content"] = text_spaces
    return message_dict

This mapper allows you to run arbitrary Python scripts, so you might choose to install the unstructured library as a dependency, or LangChain itself, to help prep your data in your ideal way.

Generate Embeddings:

In this step we again use a mapper to process each record in the pipeline, but this time it’s map-gpt-embeddings. This mapper splits the input record into chunks (if needed), then generates embeddings using the OpenAI embeddings API. The mapper uses the Meltano extractor SDK under the hood to leverage nice features like pagination, rate limit handling, etc. with minimal code.

meltano.yml

- name: map-gpt-embeddings
  variant: meltanolabs
  pip_url: git+https://github.com/MeltanoLabs/map-gpt-embeddings.git
  mappings:
  - name: add-embeddings
    config:
      document_text_property: page_content
      document_metadata_property: metadata
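
The chunking idea itself is simple; a naive word-based sketch (the 500-word cap is an arbitrary stand-in for the mapper’s real, token-aware splitting) might look like:

def split_into_chunks(text: str, max_words: int = 500) -> list[str]:
    # Embedding APIs cap input size, so oversized documents are split and
    # each chunk becomes its own record (and eventually its own vector).
    words = text.split()
    return [" ".join(words[i : i + max_words]) for i in range(0, len(words), max_words)]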

Load:

Finally, the pipeline uses target-pinecone to write these records to your Pinecone index.

meltano.yml

- name: target-pinecone
  variant: meltanolabs
  config:
    index_name: target-pinecone-index
    environment: asia-southeast1-gcp-free
    document_text_property: page_content
    embeddings_property: embeddings
    metadata_property: metadata
    pinecone_metadata_text_key: text
    load_method: overwrite

These steps are all stitched together as a single scheduled Meltano job, but they can also be run manually with a simple command like `meltano run tap-beautifulsoup clean-text add-embeddings target-pinecone`. You can self-host this if you’d like, or use Meltano Cloud to handle the infrastructure needed to run your predefined schedules.

Future Directions

In the next iteration I’m planning to leverage the Singer JSONL extractor and loader to implement the checkpointing features discussed earlier. This will unlock the ability to quickly reload from a checkpoint, preserve data backups, and quickly experiment with the clean + embed steps (e.g. trying different embedding models, cleaning techniques, etc.).

Summary

While existing LLM app tools like LangChain and LlamaIndex are useful for building LLM apps, their data loading capabilities aren’t recommended outside of initial experimentation. As I built and tested my LLM app pipeline I felt the pain of the parts that are underdeveloped and hacked together. If you’re planning to build a production-ready data pipeline to fuel your LLM apps, you should strongly consider using an EL tool purpose-built for the job.