A recent survey of 1,000 knowledge workers found that 67% of respondents report their companies are currently using artificial intelligence, with 56% of those organizations actively encouraging AI adoption. The study, conducted by Owl Labs, highlights a notable trend among Generation Z employees, 70% of whom identify as “heavily reliant” on AI tools for a wide range of tasks. This aligns with observations from OpenAI CEO Sam Altman, who described Gen Z’s relationship with ChatGPT as akin to having a “life adviser.”
While AI’s growing presence in the workplace offers many advantages, it raises an important question: Are younger workers, and their leaders, overestimating what large language models (LLMs) can do? What do today’s and tomorrow’s workforce leaders need to understand about AI’s true capabilities and its inherent limitations?
Gen Z’s Unique Interaction with AI
During a recent discussion at Sequoia Capital’s Ascent event, Altman noted a generational divide in AI use: “Older individuals tend to use ChatGPT as a replacement for Google search, whereas college students treat it more like an operating system.” According to TechCrunch, younger users often memorize complex prompts, save them on their phones, and engage AI with diverse and challenging queries.
Amanda Caswell, an AI specialist writing for Tom’s Guide, shared her personal experience with ChatGPT: “I’ve used it for everything from summarizing projects to managing anxiety. While it can’t replace human advice or therapy, it’s a valuable sounding board in tough moments.” This underscores AI’s role as a helpful assistant, though not a substitute for human judgment.
The Benefits and Risks of AI Dependence
AI can offer a powerful “second opinion” by drawing on vast data sets, including psychological theories and historical knowledge. However, experts caution against overreliance. It’s crucial to be mindful of what information is shared with AI and how its outputs are applied in professional settings. The real challenge lies in how users interact with AI, balancing its strengths with awareness of its weaknesses.
Expert Warnings: Understanding AI’s Boundaries
Process scientist Sam Drauschak explains, “AI lacks a comprehensive world model.” For example, when asked to interpret an image of a clock, AI doesn’t truly understand it but rather predicts patterns based on training data.
Stanford professor Louis Rosenberg, author of Our Next Reality, highlights that LLMs often stumble on simple tasks like telling time. Drawing from his own experience with dyslexia, Rosenberg offers insight into AI’s perspective: “Unlike humans who visualize from a fixed viewpoint, AI processes information from multiple angles simultaneously, like a cloud of perspectives. This makes directional concepts such as ‘clockwise’ challenging.”
While AI excels in some healthcare applications, such as analyzing tissue samples, where orientation doesn’t affect accuracy, it struggles with creative problem-solving and genuine originality. Although AI can generate novel text combinations and artistic works rapidly, its innovation remains limited by its reliance on existing data.
How Should We View AI?
Rather than seeing AI as an all-knowing entity, Drauschak suggests thinking of it as “more like an intern.” It synthesizes existing information but doesn’t create truly new ideas. Understanding this distinction is vital for leaders and workers alike to harness AI effectively without overestimating its abilities.