"If a worker wants to do his job well, he must first sharpen his tools." - Confucius, "The Analects of Confucius. Lu Linggong"
Front page > AI > Used an LLM? LAMs Are Coming Next, but They Need Work

Used an LLM? LAMs Are Coming Next, but They Need Work

Published on 2024-08-31


The rise of generative AI chatbots has popularized the term "large language model," the underlying AI tech working behind the scenes. Large language models (LLMs) generate output by predicting likely sequences of words in response to user input, making it appear as if the AI is capable of thinking for itself.

But LLMs aren't the only large models in town; large action models (LAMs) could be the next big thing in AI.

What's a Large Action Model (LAM)?

A LAM is an artificial intelligence system capable of understanding human input and performing a corresponding action. This is a slightly different approach from AI systems that solely focus on generating responses. The term "large action model" was first introduced by Rabbit Inc., developer of the rabbit r1 device. In the rabbit r1 launch video, the company describes a LAM as a new foundational model that helps bring AI from words to action.

LAMs are trained on large datasets of user action data; in other words, they learn by imitating human actions through demonstration. From those demonstrations, LAMs can understand and navigate the user interfaces of different websites or mobile applications and perform specific actions based on your instructions. According to Rabbit, a LAM can achieve this even if the interface is changed slightly.

You can think of LAMs as an extension of the existing capabilities of LLMs. Whereas LLMs generate text or media output by predicting the next word or token (you ask a question, and an LLM provides a text or media response), LAMs take it further by adding the ability to perform complex actions on your behalf.
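The contrast between the two can be sketched in a few lines of Python. This is purely illustrative: both `llm_generate` and `lam_act` are hypothetical stubs, not a real API, and a real LAM would drive an actual app interface rather than return a list of strings.

```python
# Illustrative contrast: an LLM replies with text, while a LAM-style
# system plans and executes a sequence of UI actions.
# Both functions below are hypothetical stubs for demonstration only.

def llm_generate(prompt: str) -> str:
    """Stub LLM: responds to the prompt with text only."""
    return "To order a ride: open the app, enter a destination, and confirm."

def lam_act(instruction: str) -> list[str]:
    """Stub LAM: turns an instruction into executed interface actions."""
    plan = [
        "open ride-hailing app",
        "enter destination",
        "confirm pickup location",
        "request ride",
    ]
    executed = []
    for step in plan:
        # A real LAM would interact with the live interface here.
        executed.append(f"done: {step}")
    return executed

print(llm_generate("How do I order a ride?"))
print(lam_act("Order me a ride home"))
```

The point of the sketch is the difference in output type: the LLM produces a description of the task, while the LAM-style loop carries out (here, simulates) each step of it.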

What Can LAMs Do?

LAMs are all about performing complex actions on your behalf. The key word is complex: LAMs are designed for advanced, multi-step tasks, though that doesn't mean they can't perform simpler actions too.

In theory, this means that you can, for instance, tell a LAM to do something on your behalf, like order a coffee from your nearby Starbucks, a ride from Uber, and even make a hotel reservation. It's therefore different from performing simple tasks like asking Google Assistant, Siri, or Alexa to turn on your TV or living room lights.

Under the hood, according to the vision shared by Rabbit Inc., the LAM is able to access the relevant website or app like Uber and navigate through its interface to take an action, say order a ride or cancel one if you change your mind.

LAMs Will Succeed LLMs, but They Are Not Ready (Yet)

The concept of LAMs is exciting, perhaps even more than LLMs. LAMs could be the next step after generative AI, enabling us to offload mundane tasks and focus on more fulfilling activities. However, as exciting as they seem, LAMs aren't ready yet.

The first commercial product that promised to leverage a LAM (the rabbit r1) didn't fully deliver on its marketing promise of performing actions on behalf of its users. The device failed so spectacularly at its core selling point that many first-hand reviews termed it fairly useless.

Even worse, an investigation by Coffeezilla, a YouTuber, in collaboration with a select group of software engineers with access to part of the r1's codebase, found that Rabbit used Playwright scripts to perform actions instead of a LAM. So instead of a device running a unique AI model, it was actually just running a bunch of If > Then style statements, a far cry from the promised LAM.
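To see why "If > Then style statements" falls short of a learned action model, here is a minimal sketch of what such hard-coded automation looks like. This is a hypothetical illustration, not Rabbit's actual code: each supported request maps to a pre-written script, and anything outside the keyword list simply fails.

```python
# Hypothetical sketch of hard-coded "If > Then" automation:
# fixed keywords map to canned scripts, with no learned model involved.

SCRIPTED_ACTIONS = {
    "order ride": ["open Uber page", "fill pickup", "fill destination", "click Request"],
    "play music": ["open Spotify page", "search track", "click Play"],
}

def handle_request(request: str) -> list[str]:
    """Match the request against fixed rules and replay a canned script."""
    request = request.lower()
    for keyword, script in SCRIPTED_ACTIONS.items():
        if keyword in request:      # If the request matches a keyword...
            return script           # ...then replay the pre-written script.
    # No learning, no generalization: unknown requests just fail.
    return ["sorry, no script for that request"]
```

Unlike a LAM trained on demonstrations, this approach cannot adapt to a new app, a reworded request, or even a slightly changed interface; every supported action has to be scripted by hand in advance.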

If there's one takeaway from Rabbit's r1 device, it's that the vision is there. However, work remains before that vision is realized, so don't get too excited yet.

This article is reproduced from: https://www.makeuseof.com/what-is-a-large-action-model-lam/
