Building LLM-powered Applications

The past few weeks have been exciting for developers interested in deploying AI-powered applications. The field is evolving quickly, and it is now possible to build AI-powered applications without spending months or years learning the ins and outs of machine learning. This opens up a whole new world of possibilities: developers can now experiment with AI in ways that were previously out of reach.

Foundation models, particularly large language models (LLMs), are now accessible to developers with minimal or no background in machine learning or data science. These developers, working in agile teams skilled in rapid iteration, can swiftly build, test, and refine the kinds of innovative applications showcased on platforms like Product Hunt. Significantly, this cohort moves at a much quicker pace than the majority of Data and AI teams.

Building apps that rely on LLMs and other foundation models.
Custom Models

The current prevailing approach for developers is to use proprietary LLMs through APIs. However, as we explained in a recent post, factors such as domain specificity, security, privacy, regulations, and IP protection and control will prompt more organizations to invest in their own custom LLMs. As an example, Bloomberg recently detailed how they built BloombergGPT, an LLM for finance. In addition, several examples of fine-tuned, medium-sized models have captured the attention of both researchers and developers, paving the way for more of them to create their own custom LLMs.

  • Commission a custom model: There are new startups that provide the necessary resources and expertise to help companies fine-tune or even train their own large language models. For example, PowerML enables organizations to surpass the performance of general-purpose LLMs by leveraging RLHF and fine-tuning techniques on their own data.
  • Fine-tune an existing model: I described popular fine-tuning techniques in a previous post. I expect more open source resources, including models and datasets with appropriate licenses, will become available, enabling teams to use them as a starting point to build their own custom models (a minimal fine-tuning sketch follows this list). For instance, Cerebras just open-sourced a collection of LLMs under the Apache 2.0 license.
  • Training models from scratch: Online research into how LLMs are produced, along with insights from friends experienced with training tools, makes it clear that a number of open-source components are commonly used to train foundation models. The distributed computing framework Ray is one of the most widely used. While PyTorch is the deep learning framework used by many LLM creators, some teams prefer alternatives such as JAX or homegrown libraries that are popular in China. Ivy is an innovative open-source tool that transpiles code between machine learning frameworks, making it easier to reuse and combine models across them.
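
To make the fine-tuning route concrete, here is a minimal sketch using the Hugging Face Transformers Trainer; the base model, dataset file, and hyperparameters are illustrative placeholders rather than recommendations.

```python
# Minimal causal-LM fine-tuning sketch (model, data, and hyperparameters
# are illustrative placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-neo-125M"  # a small open base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a plain-text corpus for causal language modeling.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```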

Many organizations that develop foundation models have dedicated teams for safety, alignment, and responsible AI. Teams that opt to build their own custom models should make similar investments.

The recent proliferation of open-source models and tools has significantly expanded the available options for teams seeking to create custom LLMs.
Third-party integrations

OpenAI recently launched a new feature called “plugins” for its ChatGPT language model, which allows developers to create tools that can access up-to-date information, run computations, or use third-party services. Companies such as Expedia, Instacart, and Shopify have already used the feature to create plugins. Third-party developers can develop plugins that range from simple calculators to more complex tools like language translation and Wolfram Alpha integration.

  • As the creator of Terraform noted, the ChatGPT plugins interface is extremely easy to use: “you write an OpenAPI manifest for your API, use human language descriptions for everything, and that’s it. You let the model figure out how to auth, chain calls, process data in between, format it for viewing, etc. There’s absolutely zero glue code.”
  • Other LLM providers are likely to offer similar resources to help developers integrate with external services. Open-source tools like LangChain and LlamaIndex were early to help developers build apps that rely on external services and sources. I expect rapid progress on third-party integration tools for building LLM-backed applications in the near future.
  • Tools such as LangChain and LlamaIndex, or even a potential open protocol for plugin-sharing between LLMs, hold appeal for developers seeking the flexibility to interchange models or target multiple LLM providers. Such tools allow developers to use the best LLM for a particular task without being locked into a single provider (a minimal sketch of this tool-calling pattern follows).
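
As a concrete illustration of the tool-calling pattern these libraries support, here is a minimal sketch using the 2023-era LangChain API; the built-in calculator tool and OpenAI model are illustrative, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# Sketch of LLM tool use with LangChain (2023-era API; assumes the
# langchain and openai packages are installed and an API key is set).
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)                # any supported provider could be swapped in
tools = load_tools(["llm-math"], llm=llm)  # a built-in calculator-style tool

# The agent lets the model decide when to call a tool and how to use the result.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
print(agent.run("What is 2 raised to the power of 0.43?"))
```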
Knowledge Bases

Knowledge graphs and other external data sources can be used to enhance LLMs by providing complementary, domain-specific, factual information. We are starting to see tools that facilitate connections to existing data sources and formats, including new systems like vector databases. These tools enable the creation of indices over both structured and unstructured data, allowing for in-context learning. Additionally, they provide an interface for querying the index and obtaining knowledge-augmented output, which enhances the accuracy and relevance of the information provided.
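
The pattern behind these tools can be sketched in a library-agnostic way: embed the documents once, retrieve the most similar ones for each query, and prepend them to the prompt. Here embed() and complete() are hypothetical stand-ins for whatever embedding model and LLM a team uses.

```python
# Library-agnostic sketch of knowledge-augmented prompting. `embed` and
# `complete` are hypothetical stand-ins for an embedding model and an LLM.
import numpy as np

def build_index(docs: list[str]) -> np.ndarray:
    """Embed every document once; a vector database would persist these."""
    return np.stack([embed(d) for d in docs])

def answer(query: str, docs: list[str], index: np.ndarray, k: int = 3) -> str:
    q = embed(query)
    # Cosine similarity between the query and every document vector.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(sims)[-k:])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return complete(prompt)  # the LLM sees domain-specific facts in-context
```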

Serving Models

Software services require several key features to meet the demands of modern computing. They must be responsive, highly available, secure, flexible, and interoperable across platforms and systems, while also being capable of handling large volumes of users and providing real-time processing and analytics capabilities. The deployment of LLMs presents unique challenges due to their size, complexity, and cost.

  • The open-source library Ray Serve aligns well with the requirements of AI applications, as it empowers developers to construct a scalable, efficient, and flexible inference service able to integrate multiple machine learning models and Python-based business logic. A minimal sketch of deploying an LLM with Ray Serve follows this list.
  • The rise of smaller and more streamlined models will improve the efficiency of LLMs in a range of applications. We’re beginning to see impressive LLMs such as LLaMA and Chinchilla that are a fraction of the size of the largest models available. Furthermore, compression and optimization techniques like pruning, quantization, and distillation will play an increasingly important role in the use of LLMs, following the path set by computer vision, with notable early examples being DistilBERT, Hugging Face DistilGPT2, distill-bloom, and PyTorch Quantization (a one-call quantization sketch also follows this list).
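
Here is a minimal Ray Serve deployment sketch; the tiny distilgpt2 model stands in for a real LLM, and a production deployment would add GPU resources, batching, and autoscaling options.

```python
# Minimal Ray Serve deployment sketch (distilgpt2 stands in for an LLM;
# assumes the ray[serve] and transformers packages are installed).
from ray import serve
from starlette.requests import Request
from transformers import pipeline

@serve.deployment(num_replicas=2)
class Generator:
    def __init__(self):
        # A real service would load an LLM here, likely onto GPUs.
        self.pipe = pipeline("text-generation", model="distilgpt2")

    async def __call__(self, request: Request) -> str:
        prompt = (await request.json())["prompt"]
        return self.pipe(prompt, max_new_tokens=64)[0]["generated_text"]

serve.run(Generator.bind())  # serves HTTP on localhost:8000 by default
```

A client can then POST a JSON payload such as {"prompt": "Hello"} to the endpoint. Likewise, the quantization mentioned above can be as simple as a single PyTorch call; the toy module below stands in for a trained model.

```python
# Dynamic quantization in PyTorch: convert Linear layers to int8 weights.
import torch

# A trained model would go here; a tiny module keeps the example runnable.
model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.ReLU())
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
```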
Summary

The proliferation of tools and resources for building LLM-powered applications has opened a new world of possibilities for developers. These tools allow developers to leverage the power of AI without having to learn the complexities of machine learning. As more organizations invest in their own custom LLMs and open-source resources become more widely available, the landscape for LLM-powered applications will become more diverse and fragmented. This presents both opportunities and challenges for developers.

It is important to remember that with great power comes great responsibility. Organizations must invest in safety, alignment, and responsible AI to guarantee that LLM-powered applications are employed for positive and ethical purposes.

An early sign that more tools are on the way: the YC Winter 2023 batch includes new tools to help teams build, customize, deploy, and manage LLMs.

Data Exchange Podcast

1. How Data and AI Happened.  Chris Wiggins is a Professor at Columbia University and the Chief Data Scientist at The New York Times. He is also co-author of How Data Happened, a fascinating historical exploration of how data has been used as a tool in shaping society, from the census to eugenics to Google search. The book traces the trajectory of data and explores new mathematical and computational techniques that serve to shape people, ideas, society, and economies.

2. Uncovering AI Trends: Pioneering Research and Uncharted Horizons.  Jakub Zavrel, the Founder and CEO at Zeta Alpha, discusses the 100 most cited AI papers of 2022, this year’s trending research topics, and the future of language models, multimodal AI, and beyond. He highlights the dominance of transformers, the rise of multimodal models, the significance of synthetic data, custom large language models, chain-of-thought reasoning, and next-gen search technology.


Spotlight

1. Introducing NLP Test.  This much-needed, open source tool helps improve the quality and reliability of NLP models. It is simple to use and provides comprehensive test coverage, helping to ensure that models are safe, effective, and responsible. The library offers over 50 test types compatible with popular NLP libraries and tasks, addressing model quality aspects such as robustness, bias, fairness, representation, and accuracy before deployment in production systems.
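
Based on the project's announcement, usage is meant to be a short chain of calls; the task and model below are illustrative, and the exact API may have evolved since.

```python
# Illustrative nlptest usage, following the project's announced API
# (task, model, and hub values are illustrative).
from nlptest import Harness

harness = Harness(task="ner", model="dslim/bert-base-NER", hub="huggingface")
harness.generate().run().report()  # generate test cases, run them, summarize results
```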

2. Microwave from BNH.  Microwave is a free AI-based bias assessment tool, designed to assist businesses in adhering to New York City’s Local Law 144. This legislation mandates the evaluation of potential biases in automated employment decision-making systems. It has been utilized for auditing AI systems for clients ranging from Fortune 100 companies to software startups, helping them measure and manage AI risks.

3. Training 175B Parameter Language Models at 1000 GPU scale with Alpa and Ray.  Alpa is an open source compiler system for automating and democratizing model-parallel training of large deep learning models. It generates parallelization plans that match or outperform hand-tuned model-parallel training systems, even on the models those systems were designed for. This post discusses the integration of Alpa and Ray to train OPT-175B, a 175B-parameter model comparable to GPT-3, with pipeline parallelism. The benchmarks show that Alpa can scale beyond 1000 GPUs, achieve state-of-the-art peak GPU utilization, and perform automatic LLM parallelization and partitioning with a one-line decorator.
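
To illustrate that one-line decorator, here is a rough sketch of Alpa's programming model; it assumes a running Ray cluster with the alpa and jax packages installed, and the linear "model", loss, and learning rate are placeholders.

```python
# Rough sketch of Alpa's decorator-driven parallelization (assumes a Ray
# cluster and the alpa/jax packages; model, loss, and data are placeholders).
import alpa
import jax
import jax.numpy as jnp

alpa.init(cluster="ray")  # attach Alpa to the Ray cluster

@alpa.parallelize  # the one-line decorator: Alpa derives the parallel plan
def train_step(params, batch):
    def loss_fn(p):
        preds = jnp.dot(batch["x"], p)  # placeholder linear "model"
        return jnp.mean((preds - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Plain SGD update; a real trainer would use an optimizer library.
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```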