

This matured into Red Hat's first major AI success: Ansible Lightspeed. This takes the Ansible DevOps program and extends it with IBM Watson Code Assistant. This generative AI service delivers, according to Red Hat, more consistent, accurate, and faster automation. It uses natural language processing and integrates with Code Assistant to access IBM Foundation Models built on OpenShift, Red Hat's Kubernetes service.

CTO Chris Wright told The Register in an interview that, unlike ChatGPT, which built its large language models (LLMs) on essentially all the publicly available data it could vacuum in, Red Hat's LLMs are curated and domain-specific. That means, we're told, these LLMs have been built on data that Red Hat knows is correct. When Lightspeed generates a particular Ansible Playbook – a reusable, simple configuration management and multi-machine deployment system – Red Hat says it's based on tested, high-quality data and code. Not some garbage someone wrote up in a hurry to meet a deadline.

Another major plus that IBM and Red Hat are bringing to the table is that, unlike the AI projects getting all the headlines, Wright said: "We can tell you exactly where the data our domain-specific LLMs come from." This is a dramatic difference from the response ChatGPT gives you when you ask it where it gets its answers.

Wright told us: "We make sure the models are accurate because we build metrics into the whole end-to-end process." This includes business metrics to make sure your projects aren't just technically successful but also deliver successful results for your business.

Knowing exactly what's in LLMs is rapidly becoming a critical issue for quality, accuracy, and legal reasons. This is similar to the open source community's push toward Software Bills of Materials (SBOMs) to make sure open source code really is what it says it is. "Enterprises require this kind of accuracy," says Wright. Red Hat says it knows that, so it and IBM have been focusing on making sure the data their LLMs use is good and legal.

Businesses, once they recover from getting drunk on AI's potential, must deal with these issues. For example, if you use code from GitHub Copilot, do you know if the code it produces is sourced from a copyrighted open source project? Can you be sued for using it? The courts are working on that exact question. Stay tuned.
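For readers who haven't seen one, the kind of artifact Lightspeed produces is an ordinary Ansible Playbook. The sketch below is an illustrative hand-written example, not output from Lightspeed itself; the `web` host group is assumed:

```yaml
---
# Illustrative playbook: install and start nginx on hosts in the "web" group.
# (Hypothetical example for this article; not generated by Ansible Lightspeed.)
- name: Ensure nginx is installed and running
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

With Lightspeed, the developer supplies a natural-language task name and the service suggests the module invocation beneath it; the quality claim above is that those suggestions come from vetted training data rather than arbitrary scraped code.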

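To make the SBOM comparison concrete: an SBOM is a machine-readable inventory of what a piece of software contains. A minimal fragment in the CycloneDX JSON format might look like the following (the component, version, and license shown are invented for illustration):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-lib",
      "version": "1.2.3",
      "purl": "pkg:generic/example-lib@1.2.3",
      "licenses": [{ "license": { "id": "Apache-2.0" } }]
    }
  ]
}
```

The push described in the article is for an analogous level of provenance for LLM training data: a verifiable list of what went in, and under what license.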