What Does LLM Mean in Internet Slang?
In the world of tech journalism and online discussion boards, 2024 will go down as the year in which seemingly everyone began talking about artificial intelligence (AI) – and if you’ve spent any time reading about AI, we can almost guarantee that you’ve seen the term “LLM” more than once. So, what does LLM mean in Internet slang? It stands for “large language model,” an application of AI and the form of AI that most people interact with right now.
If you enjoy going online to play with new tech toys – and even if you don’t – there’s no avoiding the fact that you probably already use LLMs or are at least exposed to their output almost every day. Reading this guide, you’re going to learn exactly what that means.
What Is an LLM, and How Does It Work?
An LLM is a form of artificial intelligence, and most modern generative AI works in much the same way: by scanning an enormous volume of information and learning the statistical relationships within that information. Ars Technica has published a good explanation of how LLMs work, but it essentially boils down to this.
- The LLM scans an enormous volume of text – its training data – and converts the words in that text to numbers called tokens. It then analyzes how frequently tokens appear next to one another. The patterns it learns are stored as numerical weights called parameters, and the original text is discarded after training. Depending on the size of the LLM, there may be billions or even trillions of parameters, consuming tens of gigabytes of storage space or more.
- You use an LLM by typing a prompt, like “Write a two-paragraph summary of the Bill of Rights” or “What is a vape shop?”
- The LLM converts your prompt to tokens and uses its parameters to predict, one token at a time, the words most likely to follow what you entered. The result is the LLM’s output (the toy sketch below illustrates the idea).
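To make that prediction step a bit more concrete, here’s a deliberately tiny toy in Python. It “tokenizes” a few sentences by splitting them into words, counts which word tends to follow which, and then “predicts” the next word from those counts. Real LLMs use neural networks with billions of parameters and sub-word tokens rather than simple counts and whole words, so treat this as an illustration of the idea, not a miniature LLM.

```python
from collections import Counter, defaultdict

# Toy "training data" – a real model would scan billions of pages.
training_text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog ."
)

# "Tokenize" by splitting on spaces and mapping each word to a number (a token ID).
words = training_text.split()
vocab = {word: i for i, word in enumerate(dict.fromkeys(words))}
tokens = [vocab[w] for w in words]

# Learn how often each token is followed by each other token.
follow_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the training text."""
    token = vocab[word]
    most_common_token, _ = follow_counts[token].most_common(1)[0]
    id_to_word = {i: w for w, i in vocab.items()}
    return id_to_word[most_common_token]

print(predict_next("the"))   # -> "cat" (it followed "the" most often above)
print(predict_next("sat"))   # -> "on"
```

Scale that basic idea up by many orders of magnitude – far more text and far subtler statistical patterns – and you have the intuition behind an LLM.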
If you’ve ever used ChatGPT, you’ve used an LLM. Search engines such as Google and Bing have begun using LLMs to generate a portion of their results, and an alternative LLM-based search engine called Perplexity has also become popular recently.
What Are LLMs Used For?
You can use an LLM to do almost anything involving the analysis or creation of text. Here are a few of the most common uses for LLMs.
- You can have an LLM scan a large document and give you a short summary of its contents (see the sketch after this list).
- You can have an LLM read an email and draft a response.
- You can brainstorm with an LLM by asking it to give you recipe ideas or a list of potential things to do during an upcoming trip.
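As an example of the first item on that list, here’s a minimal sketch of asking an LLM to summarize a document programmatically. It assumes the official OpenAI Python package, an API key in the OPENAI_API_KEY environment variable, and an example model name and file name – your provider, model, and document will differ.

```python
# Minimal sketch of the "summarize a document" use case.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "contract.txt" is just a placeholder for whatever document you want summarized.
document = open("contract.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in whichever model you use
    messages=[
        {"role": "user",
         "content": f"Give me a short summary of this document:\n\n{document}"},
    ],
)

print(response.choices[0].message.content)  # the LLM's summary
```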
What Are the Limitations of an LLM?
If you’re going to use an LLM, it’s important to understand the technology’s limitations because LLMs are definitely better for some tasks than others. The most important thing to know is that an LLM can’t create anything truly original. All it can do is output the words that are most likely to follow your prompt based on what’s in its training data. Here are some of the scenarios in which you’ll really feel the limitations of an LLM.
- If you ask an LLM to produce long-form text like an article or an original story, the result won’t be truly original because it’ll be based on the stories and articles the LLM digested during training. An additional weakness is that LLMs tend to lose coherence in long-form output – the text will be grammatically correct, but it’ll meander and won’t say much of anything.
- An LLM works by representing words as numbers and analyzing the relationships between those numbers. It doesn’t truly understand language. It also isn’t an expert in any subject and doesn’t know whether its responses are actually true. An LLM will frequently make an incorrect statement and present it as fact. This is called a “hallucination.”
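If you’d like to see the “words as numbers” idea for yourself, the snippet below uses the tiktoken package, which implements the tokenizer behind several OpenAI models. Other model families use different tokenizers, but the principle is the same.

```python
# Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("An LLM doesn't see words; it sees numbers like these.")
print(tokens)              # a list of integers (token IDs)
print(enc.decode(tokens))  # decoding the IDs reproduces the original text
```

Notice that the model never sees your words directly – only the token IDs – which is part of why it can’t be said to understand language the way a person does.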
Why Are LLMs Controversial?
LLMs have the potential to be useful tools, but they’re also extremely controversial for several reasons.
- Because LLMs can produce enormous volumes of text with little effort on the user’s part, they can enable the distribution of spam and misinformation on a massive scale.
- Training an LLM frequently involves ingesting copyrighted text without the authors’ permission. This disproportionately affects small publishers and independent authors, who receive no compensation for the use of their work. Large publishers are better equipped to negotiate deals with AI companies for the use of their content.
- Publishers frequently post LLM-created content online without identifying it as such. That’s because people generally don’t want to read content that wasn’t written by a human.
- LLMs have the potential to eliminate human jobs. A research report from Goldman Sachs suggests that AI could affect as many as 300 million jobs. AI will create new jobs as well, but the new roles won’t necessarily go to the people who are displaced – that’s the nature of automation.
How Can You Tell if You’re Reading Content Written by an LLM?
There’s a good chance that you’re exposed to content written by LLMs almost every day. As we explained in the previous section of this article, publishers sometimes deceive their readers about this because people generally don’t want to read content written by AI – especially when it’s unhelpful and serves no purpose except to push products.
Google has recently updated its algorithm with the intent of reducing the visibility of unhelpful AI-generated content, but you’re going to see such content anyway – spam has always been a cat-and-mouse game between publishers and search engines.
People deserve to know the source of what they read online. So, how can you tell if the content that you’re reading was produced by an LLM? Here are some tips that can help.
- The best way to learn to recognize an LLM’s output is to play with an LLM yourself. You’ll quickly discover that most LLM-generated text is very similar in structure, and LLMs tend to overuse certain words and phrases.
- As we mentioned above, LLMs tend to create long-form text that meanders and says very little of substance. An AI-created product review, for instance, might simply restate the product’s specifications without saying anything specific about what it’s like to actually use the product. This is another trait of AI-generated text that you’ll come to recognize as you see more of it.