How Social Media Trends Are Driving the Everyday Use of AI

Artificial intelligence is no longer the preserve of enterprise software suites or academic research labs. It’s showing up in makeup tutorials, recipe hacks, home organisation videos, and digital avatars—and it’s doing so fast. The shift has not been driven solely by technological breakthroughs, but by a very modern catalyst: social media.

Across platforms like TikTok, YouTube Shorts, and Instagram Reels, AI has emerged not just as a topic of fascination but as a tool embedded in viral content. In many cases, people are introduced to artificial intelligence not through education or workplace software, but via filters, voice clones, AI-generated storytimes, and automated content summaries. The result is that millions of users—many of whom wouldn’t describe themselves as “tech-savvy”—are encountering and using AI tools every day, whether they realise it or not.

One of the clearest signs of this shift is the popularity of AI-generated video content. On TikTok, videos demonstrating how to use ChatGPT for school essays, business pitches, or dating apps regularly reach millions of views. Filters and effects powered by AI—such as the “aged face”, “AI yearbook”, and “AI Barbie” trends—have gone viral globally. Other examples include voice-generator apps that recreate celebrity impressions and AI tools that turn text into stylised images. These trends often unfold at lightning speed, with new tools becoming ubiquitous within days, driven by nothing more than the virality of a few high-performing posts. However, some trends are ethically questionable, and several have courted considerable controversy.

Visibility and Expectation

This rise in AI usage through entertainment has had unintended but significant consequences. It has accelerated public familiarity with AI, normalised its presence in daily life, and raised expectations about what it can and should do. According to a recent study from the Pew Research Center, 58% of adults in the U.S. reported using some form of generative AI in 2024, up from just 14% the year before. Most had tried it out of curiosity, often inspired by something they’d seen on social media.

This groundswell of interest is undoubtedly good for awareness, but it’s also created new pressures—on platforms, on model developers, and on the infrastructure that supports these tools. For one, many people are interacting with general-purpose models, such as GPT-4 or Claude, for tasks that don’t require high-level complexity. From summarising emails to generating placeholder copy for Etsy listings, the average user doesn’t always need a multimodal frontier model—they need something lightweight, fast, and easy to use. And they need it without having to log in, subscribe, or share sensitive data.

What It Means to Have Accessible Models

This has led to growing interest in what some call “everyday AI”—models that prioritise accessibility over power, and that offer sufficient capability for the most common use cases. These include smaller language models that can run in-browser or on edge devices, as well as interfaces that don’t require a steep learning curve. The goal is not to replicate OpenAI’s most advanced models, but to support a broader base of users who now expect AI to be as available as a search engine.

In response, a number of projects have emerged offering simplified, browser-based AI tools. One example is ASI Mini. The tool is positioned as a demonstration of decentralised AI—an attempt to show that simple models can still be functional, private, and efficient, particularly for casual use. While the model’s capabilities are modest, it points to a direction in which AI could become more modular and accessible. It also raises important questions about infrastructure, openness, and who ultimately controls the AI tools that users rely on most frequently. Tools like this reflect a change in emphasis: away from sheer capability and toward control, customisation, and ease of use.

A Familiar Tool...

At the same time, the rapid adoption of AI through social media has introduced new complications. Many users treat generative tools as neutral utilities, unaware of the biases embedded in training data or the broader implications of content synthesis. As voice, image, and text generation become more automated, it becomes easier to replicate the appearance of authority without much substance. Some social media accounts now post entire feeds of AI-generated news commentary or fictionalised videos with no disclosure of synthetic origin. This not only risks misinformation, but also undermines trust in genuine content.

Furthermore, the platforms promoting AI trends are rarely designed with long-term impact in mind. Tools that go viral often do so because they’re entertaining or novel—not because they are safe, transparent, or accurate. In many cases, once the hype fades, the tools vanish too—leaving little room for ongoing development or critical oversight.

Who Builds the Tools That Stay?

Against this backdrop, the emergence of accessible, open-source models could play a stabilising role. If users are going to continue relying on AI in their personal and creative lives, they will need tools that are not only available, but also accountable. Small, local models may not dominate headlines, but they can fill a growing need for low-barrier entry points into artificial intelligence—especially for users who want to experiment, learn, or build without committing to paid platforms or centralised services.

In the end, the challenge is not just about making AI more powerful. It’s about making it feel personal, trustworthy, and responsive to human needs. Social media has introduced artificial intelligence to a global audience faster than any formal education campaign could have managed. Now, the task falls to developers, researchers, and communities to decide what that familiarity leads to.

Whether AI becomes a default tool in everyone’s daily routine—or a passing novelty—will depend not just on the models themselves, but on how easily people can access, understand, and shape them. ASI Mini suggests that future may be more open than we think—but it will need to be built with care.
