IBM and Red Hat Introduce InstructLab for Collaborative LLM Customization


IBM Research, in collaboration with Red Hat, has launched InstructLab, an open-source project for collaboratively customizing large language models (LLMs) without retraining them from scratch. The project aims to streamline the integration of community contributions into base models, significantly reducing the time and effort traditionally required.

InstructLab’s Mechanism

InstructLab operates by augmenting human-curated data with high-quality examples generated by an LLM, thereby lowering the cost of data creation. This data can then be used to enhance the base model without requiring it to be retrained from scratch, which is a substantial cost-saving measure. IBM Research has already utilized InstructLab to generate synthetic data for improving its open-source Granite models for language and code.
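
Concretely, the augmentation step can be pictured as prompting a teacher model with a handful of human-written seed examples and asking it for more in the same style. The Python sketch below is a minimal illustration of that idea; the Example class and the complete callback are hypothetical stand-ins, not InstructLab’s actual code or API.

```python
# Minimal sketch of InstructLab-style data augmentation, not the project's actual code.
# `complete` is a hypothetical callback that sends a prompt to a teacher LLM
# and returns its text response.
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str

def parse_qa_pairs(text: str) -> list[Example]:
    """Rough parser for 'Q: ... A: ...' blocks returned by the teacher model."""
    pairs = []
    for block in text.split("Q:")[1:]:
        if "A:" in block:
            q, a = block.split("A:", 1)
            pairs.append(Example(q.strip(), a.strip()))
    return pairs

def augment(seed_examples: list[Example], complete, n_new: int = 5) -> list[Example]:
    """Ask a teacher model for new Q&A pairs written in the style of the seeds."""
    seeds = "\n\n".join(f"Q: {ex.question}\nA: {ex.answer}" for ex in seed_examples)
    prompt = (
        "Here are example question-answer pairs for a task:\n\n"
        f"{seeds}\n\n"
        f"Write {n_new} new, distinct question-answer pairs in the same style, "
        "formatted as 'Q: ...' and 'A: ...'."
    )
    return parse_qa_pairs(complete(prompt))  # one call to the teacher LLM
```

Keeping the seeds human-curated anchors the synthetic data to the intended task, while the teacher model does the expensive work of scaling it up.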

Until now, improvements the community makes to open models have tended to live in scattered forks and fine-tunes that never feed back into the original. “There’s no good way to combine all of that innovation into a coherent whole,” said David Cox, vice president for AI models at IBM Research.

Recent Applications

Researchers recently used InstructLab to refine an IBM 20B Granite code model, turning it into an expert at modernizing software written for IBM Z mainframes. The process proved fast and effective enough that IBM decided to partner with Red Hat to bring InstructLab to the wider open-source community.

IBM’s current solution for mainframe modernization, watsonx Code Assistant for Z (WCA for Z), was fine-tuned on paired COBOL-Java programs, which were amplified with traditional rules-based synthetic data generators and then enhanced further using InstructLab.

“The most exciting part of InstructLab is its ability to generate new data from traditional knowledge sources,” noted Ruchir Puri, chief scientist at IBM Research. An updated version of WCA for Z is expected to be released soon.

How InstructLab Works

InstructLab features a command-line interface (CLI) that lets users add new alignment data and merge it into their target model through a GitHub workflow. The CLI acts as a test kitchen for trying out new “recipes” for generating synthetic data to teach an LLM new knowledge and skills.
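
For illustration only, a local session might chain the CLI’s main steps together as in the script below. The subcommand names (init, download, generate, train, chat) are assumptions drawn from early releases of the tool and may differ in the version you install, so treat this as a sketch of the workflow rather than a reference.

```python
# Hypothetical walk through the InstructLab CLI's local workflow.
# Subcommand names are assumptions based on early releases and may have changed;
# run `ilab --help` to see the commands shipped with your installed version.
import subprocess

steps = [
    ["ilab", "init"],      # create a local config and pull down the taxonomy
    ["ilab", "download"],  # fetch a base model to experiment against
    ["ilab", "generate"],  # synthesize training data from new taxonomy entries
    ["ilab", "train"],     # fine-tune the local model on the generated data
    ["ilab", "chat"],      # interactively check whether the new skill took hold
]

for cmd in steps:
    subprocess.run(cmd, check=True)  # stop at the first failing step
```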

The backend of InstructLab is powered by IBM Research’s synthetic data generation and phased-training method known as Large-Scale Alignment for ChatBots (LAB). This method uses a taxonomy-driven approach to create high-quality data for specific tasks, ensuring that new information can be assimilated without overwriting previously learned data.
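
To make the taxonomy-driven idea concrete, the sketch below walks a tree of seed-example files, generates synthetic data for each leaf, and then trains in separate phases so that new knowledge and new skills are layered in one step at a time. The directory layout, branch names, and the generate_for_leaf and fine_tune helpers are illustrative assumptions, not the LAB implementation.

```python
# Illustrative taxonomy-driven, phased pipeline; layout and helpers are assumptions.
from pathlib import Path
import yaml  # pip install pyyaml

def collect_leaves(taxonomy_root: str) -> dict[str, list[dict]]:
    """Group seed-example files by their top-level taxonomy branch."""
    branches: dict[str, list[dict]] = {}
    for qna in Path(taxonomy_root).rglob("qna.yaml"):
        branch = qna.relative_to(taxonomy_root).parts[0]  # e.g. 'knowledge'
        branches.setdefault(branch, []).append(yaml.safe_load(qna.read_text()))
    return branches

def run_pipeline(taxonomy_root, generate_for_leaf, fine_tune, base_model):
    """Phase 1: tune on knowledge data; phase 2: layer skills data on top."""
    branches = collect_leaves(taxonomy_root)
    knowledge = [ex for leaf in branches.pop("knowledge", [])
                 for ex in generate_for_leaf(leaf)]
    skills = [ex for leaves in branches.values()
              for leaf in leaves for ex in generate_for_leaf(leaf)]
    model = fine_tune(base_model, knowledge)  # assimilate new facts first
    model = fine_tune(model, skills)          # then teach skills that build on them
    return model
```

Generating and training per taxonomy branch is one simple way to fold new information in incrementally instead of retraining on everything at once.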

“Instead of having a large company decide what your model knows, InstructLab lets you dictate through its taxonomy what knowledge and skills your model should have,” said Akash Srivastava, the IBM researcher who led the team that developed LAB.

Community Collaboration

InstructLab encourages community participation by allowing users to experiment with local versions of IBM’s Granite-7B and Merlinite-7B models, and submit improvements as pull requests to the InstructLab taxonomy on GitHub. Project maintainers review the proposed skills, and if they meet community guidelines, the data is generated and used to fine-tune the base model. Updated versions are then released back to the community on Hugging Face.
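
A contribution boils down to a small file of seed examples placed at the appropriate leaf of the taxonomy tree. The snippet below drafts such a file; the leaf path and field names (task_description, created_by, seed_examples) follow the general shape of the taxonomy’s qna.yaml files but are assumptions here, so check the repository’s contribution guide for the current schema before opening a pull request.

```python
# Sketch of drafting a new skill entry for a taxonomy pull request.
# Field names and the leaf path are assumptions that may lag the current schema;
# follow the InstructLab taxonomy repository's contribution guide when submitting.
from pathlib import Path
import yaml  # pip install pyyaml

skill = {
    "task_description": "Convert dates between common written formats.",
    "created_by": "your-github-username",
    "seed_examples": [
        {"question": "Convert 2024-05-06 to a long-form date.",
         "answer": "May 6, 2024."},
        {"question": "What is 14 July 1789 in ISO 8601 format?",
         "answer": "1789-07-14."},
    ],
}

# Skills live under a branch of the taxonomy, one directory per task.
leaf = Path("taxonomy/compositional_skills/general/date_conversion")
leaf.mkdir(parents=True, exist_ok=True)
(leaf / "qna.yaml").write_text(yaml.safe_dump(skill, sort_keys=False))
```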

IBM has dedicated its AI supercomputer, Vela, to updating InstructLab models weekly. As the project scales, other public models may be included. The Apache 2.0 license governs all data and code generated by the project.

The Power of Open Source

Open-source software has been a cornerstone of the internet, driving innovation and security. InstructLab aims to bring these benefits to generative language models by providing transparent, collaborative tools for model customization. This initiative follows IBM and Red Hat’s long history of open-source contributions, including projects like PyTorch, Kubernetes, and the Red Hat OpenShift platform.

“This breakthrough innovation unlocks something that was next to impossible before — the ability for communities to contribute to models and improve them together,” said Máirín Duffy, software engineering manager of the Red Hat Enterprise Linux AI team.

For more details, visit the official IBM Research blog.

