Everything posted by ResidentialBusiness
-
UK workers set to get unlimited compensation for unfair dismissal
Government plans to lift current limit of £118,000 View the full article
-
15 tiny habits that compound into major productivity gains
Small changes in routines can create significant improvements in how much gets accomplished in a day. Here, experts share 15 practical habits that can boost productivity and lead to better results in your work and personal life. Plan Your Week Every Friday Afternoon One small habit that’s made the biggest long-term difference in my productivity is making a plan every Friday for the coming week. Most people start their Mondays feeling behind before they’ve even begun. Their inbox dictates their day, and they spend valuable energy reacting instead of leading. I used to do the same thing—until I started ending each week with a simple Friday planning ritual. Before I wrap up on Friday, I take less than 30 minutes to look ahead at the next week. I review upcoming meetings, identify priorities, and map out where key tasks will fit. When I close my laptop, I know exactly what next week looks like — and I can actually enjoy my weekend because my brain isn’t spinning about what’s waiting for me. When Monday morning comes, I’m energized from actually resting over the weekend, and I hit the ground running with clarity and confidence. I’m not reacting; I’m executing a strategic plan. Over time, this habit has helped me stay focused on meaningful work, protect my time, and feel genuinely present—both at work and at home. It’s a small commitment that delivers massive peace of mind and productivity all week long. Samantha Lane, TEDx Speaker | Time Management Coach & Executive Trainer, Present and Productive | Origami Day Commit to an Earlier Bedtime There’s one habit that changed how I work, and it didn’t come from any kind of glamorous productivity tool. Funny enough, I figured it out by noticing a damaging pattern. My nights were setting me up to fail the next day. For a long time, I kept waking up tired. Not just sleepy-tired, but the kind where your brain feels heavy the moment you open your eyes. 
I’d load up on coffee, push through my schedule, and hope I’d somehow get more focused as the day went on. Which never happened. I run my own business, so there was always one more email, one more task, one more “quick thing” to do before bed. By the time I finally sat down to relax, the night was basically gone. And then I’d stay up way past 12 a.m. because I felt like I hadn’t had any time to myself. One night, I ended up going to bed around 10:30 p.m. without even planning it. The next morning, I felt completely different. I didn’t need to drag myself out of bed. My brain felt clear. I actually felt awake. I went to bed early the next night, too, just to see if it was a fluke. It wasn’t. The difference was massive. That’s when I realized how much my evenings were affecting everything. My nights were draining the version of me I needed in the morning. Sticking to that bedtime meant I had to stop working earlier. I picked 6 p.m. and held myself to it. At first, it was hard. I kept feeling like I should be doing something. I was antsy. But that one boundary changed how I worked during the day. I stopped wasting time on little tasks and started focusing on what actually mattered. Plus, I got my evenings back. I didn’t need to stay up late anymore because I finally had real time to wind down. It’s not exciting or trendy, but going to bed a few hours earlier (before midnight) changed everything for me. Out of all the things I’ve tried over the years and all the money I’ve spent on flashy “productivity tools,” this caused the biggest difference in how I feel and how well I work. Lisa Jeffs, CEO & Founder, Lisa Jeffs Toronto Life Coach & Toronto Executive Coach Remove Distractions and Focus on One Task The habit that’s changed everything for me is ruthless single-tasking. One task at a time, no exceptions. To make this work, I had to remove every distraction that tempted me to multitask. I used to run three monitors thinking more screen space meant more productivity. 
The opposite happened. Every open tab, software window, and notification pulled my attention away from the one task I needed to complete. I switched to a single screen and started wearing earplugs to block out noise. It sounds extreme, but it forces me to stay locked in on what actually matters. The results showed up fast. Projects that used to take days now get finished in hours because I’m not context switching every few minutes. Client work gets deeper attention, which means better outcomes and fewer revisions. My team noticed the difference too because I’m more present in conversations instead of half-listening while checking Slack. The hardest part isn’t the setup. It’s saying no to things that feel urgent but aren’t important. Once you get comfortable protecting that single-task focus, the productivity gains compound quickly. Xavier Tai, Founder, EasyScalers Process Action Items Immediately After Every Meeting One small habit that’s had an outsized impact on my productivity is blocking five to ten minutes after every meeting—or block of meetings—to process action items immediately. In most workplaces, meetings end and we rush straight to the next task. We talk about next steps, but then they get lost in the shuffle or buried on an endless to-do list. Taking even a few minutes of transition time changes everything. Here’s how I use it: anything that takes two minutes or less, I do right away—sending a follow-up email, scheduling the next call, or updating a document. Anything that takes longer than two minutes, I don’t put on a to-do list; I schedule it directly on my calendar for a specific day and time. This simple practice prevents small tasks from falling through the cracks and eliminates the mental clutter of wondering what I forgot. Over time, it compounds—projects move faster, communication stays tight, and I end the day with far fewer loose ends. It’s a tiny adjustment that creates exponential gains in focus, reliability, and calm. 
Marissa McKool MPH, Burnout Coach, The Public Health Burnout Coach Reset Your Workspace Every Evening Most people lose tomorrow because they don’t close today properly. That’s why I swear by a habit I call “The Reset.” Every evening before I close my laptop, I take 10 minutes to reset my workspace, my inbox, and my head. It sounds simple, but it has been a game changer. I clear out the clutter, finish any two-minute tasks, and write down the three most important things I’ll tackle first the next morning. Then I stop working. Because of this, I start every day on the front foot and not playing catch-up. I know what matters, my desk is clear, and I’m not wasting that first hour reacting to whatever is shouting loudest in my inbox. Before I started doing it, I would often end the day in chaos with tabs open everywhere, half-finished thoughts, and energy well and truly spent. The next morning was always about reassembling my focus. Now those 10 minutes buy me hours of clarity. Sean McPheat, Founder & CEO, MTD Training Document Recurring Processes as You Complete Them One habit that may seem small but made a huge impact not only on my productivity but also on how smoothly our operations run is creating standard operating procedures as I go. In our overall operations, there are always recurring tasks like onboarding new hires, processing orders, generating reports, approving content, and managing communications with suppliers. When I was getting started, I always found myself re-explaining the same process or digging through my emails to remember how I did something the last time. It ended up being mentally draining and very inefficient. That’s when I started to make it a rule: if I have to do something more than twice, it needs to have an SOP. So, whenever I complete a certain process, I take a couple of minutes to document it, taking note of each step, the tools I used, and the templates needed. 
It doesn’t have to be 100% perfect immediately—it just has to exist, and I just refine it as I go along. Over time, that documentation evolves into a solid and scalable process. The impact of this productivity hack has been significant. New hires/team members can get up to speed faster and make fewer mistakes, and I spend less time teaching the entire process and more time focusing on making strategic decisions. Jessica Bane, Director of Business Operations, GoPromotional Take Two-Minute Pauses Between Major Tasks The habit that changed my productivity wasn’t about doing more; it was about transitioning better. For years, I moved through my day as if I were being chased. I had back-to-back meetings. I switched quickly from strategic planning to operational tasks. I jumped from tough conversations to designing training content. There was no pause or transition, just constant forward motion. I thought I was being efficient, but I was losing focus everywhere. The change came during my time at AWS. I balanced UX research, EQ-centered leadership development design, and implementing generative AI solutions, often all in the same afternoon. I noticed my best work happened when I had natural breaks between tasks, but my calendar rarely allowed for that. So, I built it in: a two-minute reset between each major task or meeting. I did not scroll social media or check emails. Instead, I took a genuine mental break. I stepped away from my screen, took three deep breaths, and asked myself: What does the next task really need from me? Sometimes the answer was creative energy; other times, it was focused analysis or empathetic listening. This habit wasn’t just about resting; it was about recalibrating so I could engage with each task using the right mindset, not just leftover energy from before. The impact was immediate and noticeable. 
When I led research on automating training processes, those two-minute resets helped me shift from technical research to strategic conversations with stakeholders. I could be fully present in each context rather than dragging the last conversation into the next one. My error rate dropped. I stopped rereading emails three times because I was skimming distractedly. I caught mistakes before they became problems. My team noticed I was more responsive to nuances in conversation. The productivity gain wasn’t about fitting more into my day; it was about focusing fully on what was already there. What makes this habit sustainable is that it’s small enough to feel easy but substantial enough to create a real mental reset. You don’t need a meditation app, a special space, or permission. You just need to stop treating your attention like it’s an endless resource and start treating transitions like the productive work they truly are. Your brain isn’t a machine that switches contexts instantly without cost. Respect the transition. Your focus will thank you. Alinnette Casiano, Leadership Strategist, Growing Your EQ Spend 15 Minutes on Your Critical Task I started every workday with exactly 15 minutes on my most critical task, no matter what. Just the first 15 minutes, not the complete thing. It’s simple neuroscience: when you start small and keep going, your anterior cingulate cortex, which controls switching tasks and starting them, gets ready. After two weeks, the neural connection gets stronger, and what used to seem like climbing a mountain becomes second nature. One executive I trained was overwhelmed with leadership duties and hadn’t written a strategic memo in months. We made one rule: every morning for three minutes, just write down one thought. She finished her whole strategy framework in 90 days without once feeling exhausted. She wasn’t suddenly more disciplined; her brain had only changed how it started tasks to make them seem less threatening. 
The underlying lesson is that you don’t get more done by working harder; you get more done by getting your brain to believe that starting is safe and easy. Sydney Ceruto, Founder, MindLAB Neuroscience Start Each Day with Exercise and Deep Work The single most impactful habit I’ve maintained for two decades is The Habit of Winning the Morning. It’s not about the alarm time; it’s about preloading your day with uninterrupted, high-leverage work. I’m at my desk by 7:00 a.m., having already exercised and cleared my personal mental clutter. This routine engineers a psychological and professional head start that lasts all day. Here is the measurable value this habit delivers: Gain a 2-Hour Head Start on Your Peak Performance: By getting to my desk early, I consistently create a daily buffer of focused, deep work that prevents me from playing reactive catch-up for the rest of the day. Build Mental Resilience Through Physical Movement: Dedicating a full hour to exercise delivers a sustained surge of chemical energy and mental clarity, ensuring I approach high-stakes problem-solving with maximum focus. Achieve Consistent Momentum and Confidence: Starting the day with intentional wins (exercise, deep work) generates a sense of control and efficacy that fuels an energetic and proactive approach throughout the entire workday. Thomas Powner, Executive Career Management Coach * Recruiter * Resume Writer * Career Keynote Speaker, Career Thinker Inc. Brain Dump Weekly Plans to Your Assistant Every Monday morning on my drive, I talk out loud to my custom GPT that acts as my personal assistant. I brain dump everything for the week: projects, errands, client follow-ups, content, even small admin. My assistant organizes it by day of the week and by category, flags blind spots, and asks clarifying questions I usually forget. When we finish, it gives me a single structured list. 
I move that list into Google Tasks, and Zapier syncs it to my Notion to-do database so my workspace stays current. Each morning, Google’s contextual view with Gemini gives me a quick summary of what matters today and pulls helpful context from Gmail and Drive. The result is simple. I start the week with a clear plan, my tools stay in sync, and I stop carrying the entire to-do list in my head. Fewer dropped balls, better prep for calls, and more focus time because I’m not re-sorting priorities all day. Gloria Espina, Recruitment Systems Strategist, Recruitment Gal Write Worries Down and Store Them Away When life or work starts to feel overwhelming, I turn to a simple practice I call “the box.” It’s a small wooden box that sits underneath my desk, not for storage, but for clarity. Whenever I’m consumed by stress or distraction, I write each worry on a piece of paper, fold it, and place it inside. Once the lid closes, that thought has been acknowledged and contained. It no longer controls my focus. Weeks or months later, I open the box and read those same notes. Almost without fail, the things that once felt so urgent never materialized, or they resolved with far less impact than I feared. That realization has fundamentally shifted how I manage my energy and productivity. I’ve learned that clutter in your mind is just as costly as clutter in your calendar. This ritual helps me quiet the noise so I can channel energy toward meaningful work, which moves the business forward. By giving my worries a place to live outside my head, I create space for clear thinking, better decisions, and focused execution. It’s a small habit, but one that’s helped me lead with more presence and produce more with intention—not exhaustion. Felicia Gallagher, Founder | CFO | Finance Strategist, ThreeStone Solutions Replace Your Phone with a Dedicated Alarm I stopped using my phone as my alarm device.
I started this practice after I realized that, while convenient to have one device next to my bedside, as soon as I woke up to turn the alarm off, I could not help but see several notifications that I had received overnight. Even if I did not look at the notifications, within seconds of waking up my brain was off to the races. The fact that I knew there were messages on my phone was enough to fill my mind with an unhealthy cocktail of curiosity, anxiety, and even fear of what might have happened overnight and needed my immediate attention. Needless to say, whatever recovery and relaxation benefits I had gained with sleep left my mind within seconds. All of this became much worse when I decided to actually read any of the notifications. Switching out my phone as an alarm has saved me from getting the instant info and data hit that would provoke nervous energy. This in turn has allowed me additional mental runway before the brain gets fired up with external data. It took some serious practice to make this transition. Now the anxiety levels are much lower getting out of bed and my ability to thoughtfully engage with business issues on my phone has gone up. It has also helped me be more present with the family and be able to support their early morning needs without me being distracted. Rohit Bassi, Founder & CEO, People Quotient Choose One High-Impact Task Each Morning “The main thing is to keep the main thing the main thing,” said Stephen R. Covey. I got inspired by that quote long ago. Because as leaders, we are always surrounded by priorities, requests, and opinions. But not everything that comes across deserves our attention. So, I adopted this one simple habit inspired by Covey. Every morning, I decide on one task that will make the biggest impact that day. This daily clarity greatly reduces the feeling of overwhelm, alongside giving me room to handle the unexpected without losing sight of what truly matters. It helps me stay intentional. 
And I end each day with a real sense of accomplishment. Because when the main thing stays the main thing, everything else starts falling into place. Sandeep Kashyap, CEO & Founder, ProofHub Carve Out Dedicated Calendar Blocks Time blocking. It’s not enough just to have a to-do list to be productive because different tasks require different amounts of time and energy. When you carve out time on your calendar, you ensure that there’s enough time in your day to get the right things done. You can also prioritize important tasks to be done first—and at times when you’re at your best. Time blocking pairs well with Cal Newport’s concept of deep work. Save time for yourself to get quality work done, not just a quantity of shallow work. This goes for both professional and personal tasks. I adopted time blocking into my own workflow about six years ago, and it’s been invaluable. I take time every week to set my schedule, and then I don’t have to worry about missing things. Robert Carnes, Marketing Director, GreenMellen Build Daily Rhythm Through Four Reflection Moments For much of my career, I believed productivity meant maximizing output, earlier mornings, longer hours, and tighter schedules. Over time, I learned that real productivity isn’t about doing more. It’s about aligning more often. The most effective habit I’ve built doesn’t require a new system or app. It’s a simple reflection routine that takes just three to five minutes at a time, yet it’s completely changed how I lead, think, and show up for the people who count on me. Morning Reflection—Set Intention Each morning before leaving home, I take a few quiet minutes to ask, What deserves my focus today? That one question sets the tone for the day. It helps me focus on what truly matters instead of reacting to noise. Many of my clients do this same reflection when they first get to their office before opening their email or going into meetings. 
Whether at home or at work, that intentional pause turns a busy day into a focused one. Pre-Meeting Reset—Regain Presence Before important meetings or tough conversations, I take one to three minutes to reset. A deep breath, a quick stretch, and the reminder: Be fully present here. That short pause helps me show up calm, clear, and attentive. It helps me listen better, respond more thoughtfully, and lead with steadiness instead of urgency. End of Day Reflection—Create Closure At the end of the day, I take a few minutes to look back and ask, What moved forward today, and what needs my attention tomorrow? That simple check-in helps me close the loop mentally. It keeps unfinished thoughts from following me home and allows me to be more present with my family. The result is better rest, stronger relationships, and a clear head for tomorrow. Evening Reflection—End with Gratitude Before bed, I take a moment to ask, What am I grateful for, and what did I learn today? That question helps me reset and end the day. These four moments—morning intention, pre-meeting reset, end of day closure, and evening gratitude—have become my daily rhythm. They’ve helped me lead with greater presence, make clearer decisions, and stay grounded when things get complex. Real productivity isn’t built in big bursts of effort. It’s built in quiet, consistent moments of reflection that reconnect what you do with who you want to be and how you want to show up for others daily. Gearl Loden, Leadership Consultant/Speaker, Loden Leadership + Consulting View the full article
-
Bing Tests New Search Bar With Advanced Tools Menu
Microsoft is testing a new search bar with a more powerful tools menu button. This tools menu is larger, has a plus sign, and gives you options to search with your voice, search with an image, and even make an image. View the full article
-
Hershey’s innovation lab just created its own Dubai chocolate bar
Hershey’s has finally jumped on the Dubai chocolate trend, and it typifies the intentional approach the company is taking to viral candy. The Hershey’s Company announced it’s releasing a limited-edition Hershey’s Dubai-Inspired Chocolate Bar that adds green pistachio filling and kadayif pastry to a classic break-apart Hershey’s chocolate. They’re treating the release like a sneaker drop: only 10,000 bars are being released. “We don’t chase every trend, but this one was big enough, and there was an opportunity to do it in a Hershey way,” Dan Mohnshine, Hershey’s vice president of demand creation strategy and brand development, tells Fast Company. To make the bars, Hershey’s flew a small team to Italy to source pistachio and kadayif cream. The company reviewed nine formulas before deciding on the recipe they’re using, which was chosen for its balance of crunch and salt to complement the milk chocolate. “The ingredients and filling we developed are exclusive to the Hershey’s Dubai-inspired bar—you won’t find this exact combination anywhere else,” Mohnshine says. The bars will be available for $8.99 at the Hershey’s Chocolate World Times Square on Thursday or online through Gopuff orders in New York City, Philadelphia, or Chicago. It was a roughly two-month process from late July to September to get the bar from concept to reality, and all 10,000 bars were produced in the company’s Hershey, Pennsylvania, research and development center. The candymaker has a “Velocity Lab” capability that Mohnshine says is “all about taking ideas to consumers quickly by embracing agility, an iterative mindset, and rapid prototyping based on trend signals.” For the Hershey’s Company, choosing when to jump on a trend depends on whether the candymaker believes it can provide a unique offering and value. Hershey’s is late to the food trend, which went viral on TikTok beginning in 2023. 
Shake Shack introduced a Dubai Chocolate Pistachio Shake in June, and Lindt and Ghirardelli released their takes on the trend in July and October, respectively. Demand for pistachios broke the supply chain. Still, that hasn’t hurt the company’s bottom line. As a limited-edition drop, Hershey’s Dubai-inspired bar is just a sugar rush in its overall sales. Though the company reported on its October earnings call that Halloween sales were disappointing, which CEO Kirk Tanner blamed in part on the day of the week, it’s seen a 6.5% increase in consolidated net sales. Though just 10,000 bars will be released, Mohnshine says “never say never.” “We’re really excited to hear what our fans think about Hershey’s version of a Dubai-inspired chocolate bar,” he says. View the full article
-
Nations Direct agrees to settlement over 2023 data breach
The wholesale lender fell victim to a data incursion two years ago in a months-long period marked by several high-profile cybersecurity incidents. View the full article
-
How Fannie, Freddie product mix could shift if they're uplisted
The government-sponsored enterprises plan to back off competition with the FHA and some think they'll incentivize different loan types. Part 4 in a series. View the full article
-
Why most New Year’s resolutions fail—and what that says about leadership habits
Yes, it’s that time of year again: when we don’t just wrap up one chapter but start anticipating the next, determined to begin with something that resembles a clean slate. The ritual is familiar: a little reflection, a little optimism, and a list of promises to our future selves. New Year’s resolutions are extremely popular, particularly relative to their low execution rate. According to a 2025 YouGov survey, 31% of U.S. adults can be expected to set at least one resolution for the new year–with the highest participation among younger adults (under 30), of whom 58% say they will make a resolution. Saving money emerges as the single most common New Year’s resolution among Americans (26%), followed closely by goals related to health and well-being: 22% plan to improve physical health, 22% want to exercise more, another 22% aim simply to “be happier,” and 20% intend to eat healthier. The benefits without the work New Year’s resolutions reveal a painful truth about change, namely: everybody seems to love change, until they have to do it. Indeed, even when people say they want to change, what they actually want is to have changed–in other words, to enjoy the benefits of having changed or having achieved the desired transformation, but without the painful and effortful work of undergoing the process to achieve it. We are, in essence, creatures of habit, and though every habit was once a new behavior, it is hard to unlearn behavioral patterns and dispositions that have become defining habits. In the famous words of Samuel Johnson, “the chains of habit are too weak to be felt until they are too strong to be broken.” Although New Year’s resolutions may seem like trivial once-a-year occasions, they paint a bleak picture of our capacity to change. Consider that these are typically borne out of a genuine desire to improve ourselves, and are motivated by intrinsic or at least personal motives, rather than people telling us to change or evolve.
In theory, this should put us in an ideal position to achieve our goals, since all change is fundamentally the product of our own desire or will to change–that is, the only way to get someone to do something is to get them to want to do it. Hard to keep In practice, however, we do a dismal job holding our resolutions and are generally likely to break them and then recycle them in future years. In a longitudinal study of 200 resolvers, 77% had maintained their resolutions after one week, but this dropped to 55% after one month, 43% after three months, 40% at six months, and only 19% still held to them after two years. Another study provides more reasons for optimism: it tracked 159 people making New Year’s resolutions and 123 similar non-resolvers for six months. Both groups had comparable backgrounds and goals (mainly weight loss, exercise, and smoking cessation), but their outcomes diverged sharply: 46% of resolvers were still successful at six months, compared with just 4% of non-resolvers. Among resolvers, higher self-efficacy, greater readiness to change, and stronger change skills predicted success, and those who succeeded relied more on practical cognitive-behavioral strategies than on emotional or awareness-raising tactics. The authors conclude that New Year’s resolutions offer a valuable natural window into how real behavior change unfolds. The connection to organizational change That said, when we look at most organizational change interventions (especially the ubiquitous attempts to develop or “transform” leaders), there are even fewer reasons for optimism. Here’s why: (1) Leadership change interventions are rarely driven by internal desire. When organizations ask leaders to change, they usually want them to change in a specific way, aligned with the business agenda. This means the change is externally imposed rather than intrinsically motivated.
Unsurprisingly, meta-analytic research shows that intrinsic motivation dramatically increases the success of behavioral change interventions, while externally imposed change often produces compliance without real transformation. (2) Measurable outcomes or quantifiable metrics are often lacking. Many leadership development programs still rely on vague perceptions of improvement or on self-reported progress, rather than objective before-and-after data. Organizations often over-index on participation, sentiment surveys, or anecdotal indicators, while ignoring behavioral KPIs or longitudinal performance outcomes. Success becomes conflated with completion, and leaders often receive credit for attending a program rather than actually changing. (3) Personality often stands in the way of change. Most leadership behaviors that organizations want leaders to change, such as listening more, dominating less, delegating better, becoming less impulsive, or being more emotionally regulated, are deeply rooted in personality. And personality is highly stable. Leaders don’t micromanage, interrupt, or avoid conflict because they “forgot” how to behave differently; they do so because these tendencies are their psychological defaults. Asking someone to act against their personality is rarely sustainable unless supported by strong motivation, environmental scaffolding, and ongoing reinforcement. (4) The environment often pushes leaders back to old habits. Even when leaders make progress, the organizational context often pulls them back. If incentives, culture, role expectations, team dynamics, and senior-leader behaviors remain unchanged, new habits cannot survive. A leader may return from a development program eager to delegate more, only to find that the culture rewards heroic overwork, rapid responsiveness, and “being in control.” In such contexts, reversion to old habits is almost guaranteed. 
What works And yet, well-designed leadership development interventions do work, typically yielding average improvements of around 30% for approximately 30% of leaders. Crucially, they tend to share certain characteristics: (1) They are enhanced and supported by a coach. Coaching meta-analyses show significant positive effects on behavioral change, goal attainment, and leadership effectiveness. Coaches help leaders translate insight into action, apply new behaviors in context, and stay accountable. (2) They rely on high-quality, evidence-based coaching and expert change professionals. The expertise of the coach matters. Effective coaches draw on validated psychological frameworks, provide accurate diagnosis, challenge constructively, and avoid the vague platitudes common in low-quality coaching. (3) They ensure the organizational context and incentives align with the change expected. If new behaviors are not reinforced (or worse, if the organization rewards the opposite behaviors) change will not stick. Structural alignment (incentives, culture, team expectations) is a critical amplifier. (4) They leverage the science of behavioral change. Small habit formation, nudges, friction reduction, implementation intentions, environment design, and regular prompts all increase the likelihood that new behaviors will persist. (5) Most importantly, they select the right leaders to invest in. Coachability, which largely boils down to openness to feedback, willingness to self-reflect, humility, and a genuine desire to improve, is one of the strongest predictors of leadership development ROI. Whatever you think of personalities like The President or Musk, it’s clear they have little appetite for being coached. In contrast, leaders who are curious, self-aware, and eager to grow are far more likely to change. 
Viewed through this lens, New Year’s resolutions and leadership development are two versions of the same psychological phenomenon: most people want the outcomes of change without the discomfort of transformation. Leaders, like the rest of human beings, start the year with good intentions, but only a minority translate those intentions into new habits. Perhaps the most important New Year’s resolution for leaders, then, is not to “change everything,” but to commit to the small, unglamorous, sustained behaviors that actually make change possible. After all, lasting leadership growth—like lasting personal change—is less about setting resolutions and more about building habits that survive past January, and perhaps even until the next decade. View the full article
-
Why dynamic pricing is becoming the rule, not the exception
Changing prices for what the market will bear has long been a staple of pricing for everything from airplane seats to a gallon of gas to hotel rooms. Indeed, an entire field of so-called “dynamic pricing” has emerged to figure out how to extract the most profit from the most willing customers. But we’re now at an inflection point in which such practices are going from the exception, applied to relatively few items, to the norm. Regulators are right in the midst of figuring out what the guardrails will be.
The Intermediary Industrial Complex
Remember when a gallon of milk cost the same for everyone who walked into the store? That quaint notion is rapidly becoming as obsolete as the paper price tag itself. Retailers frequently use people’s personal information to set targeted, tailored prices for goods and services—from a person’s location and demographics, down to their mouse movements on a webpage. We’re witnessing the emergence of a pricing ecosystem where your browsing history, zip code, and even the speed at which you scroll through a web page can determine what you pay. Companies like Revionics, PROS, and Bloomreach are building the infrastructure for a world where pricing becomes as personalized as one’s Netflix recommendations. The Federal Trade Commission found that these pricing intermediaries worked with at least 250 clients selling goods or services, ranging from grocery stores to apparel retailers. This isn’t a niche practice—it’s becoming the operating system for modern commerce. Consider this scenario from the FTC’s findings: A consumer who is profiled as a new parent may intentionally be shown higher-priced baby thermometers on the first page of their search results. This opens the door to algorithmic exploitation of vulnerability. When your recent searches reveal a sick child, the system is programmed to catch you at the moment you’re likely to be least price-sensitive.
The regulatory response is crystallizing around three distinct vectors. First, consumer protection law challenges the fundamental fairness of charging different prices to different people for identical products. The Robinson-Patman Act, dormant for decades, may find new life in addressing digital-age price discrimination. It was originally intended to help small vendors compete with large ones by forcing everybody onto the same playing field when it came to pricing, eliminating predatory pricing by large players. Second, advocates of stronger privacy laws question whether using granular personal data for pricing decisions constitutes an unfair practice. The Electronic Frontier Foundation argues that predatory pricing is only possible because our privacy laws are so weak. Americans, they suggest, deserve to know whether businesses are using detailed consumer data to deploy surveillance pricing—for instance, charging higher prices to those already in the parking lot (as Target has been accused of doing) or to those with fewer alternative options (as Staples has been accused of doing). Third, antitrust concerns emerge, as companies with the power and resources to engage in surveillance pricing may run afoul of competition law. Only the largest companies have sufficient data to perfect these systems, potentially creating insurmountable competitive moats. Further, the algorithms used to set prices can act as signals that allow firms to effectively collude, even if they don’t do so explicitly.
With everything else becoming dynamic, perhaps the era of fixed prices is over
Here’s the strategic contradiction companies must navigate: The same data capabilities that enable personalized service—the holy grail of customer experience—also enable personalized exploitation. Every company talks about “customer-centricity,” but surveillance pricing reveals the tension between serving customers better and extracting maximum value from them.
Forward-thinking companies might find competitive advantage in explicitly rejecting surveillance pricing. “Same price for everyone” could become the new “organic” or “fair trade”—a trust signal that commands its own price premium. Costco’s membership model already embodies this principle: pay to enter a space where prices are transparent and universal—and Costco has long set a ceiling on how much margin it extracts from its member-customers. We’re in a brief window where surveillance pricing is technologically possible but not yet legally constrained. Companies experimenting with these tools should assume that window will close—the only question is how quickly and how completely. View the full article
-
Visa’s next World Cup move: Soccer-themed art
Just before Friday’s draw for the FIFA men’s World Cup 2026 group stage, Visa is launching an artistic update to its sponsorship of the tournament. The brand just announced a new partnership with Pharrell Williams’ Joopiter auction and e-commerce platform on a new World Cup-themed art collection, featuring 20 different artists from six continents. The collection aims to show how creativity drives commerce—and how artists are the entrepreneurs shaping communities and culture around the world. Visa has unveiled the first five pieces in the collection at an exclusive Miami showcase called “The Art of the Draw,” hosted by multidisciplinary creator KidSuper. The showcase features the works of artists Darien Birks, Nathan Walker, Cesar Canseco, Ivan Roque, and Rafael Mayani. The rest of the collection is set to come before the tournament kicks off in June. Visa chief marketing officer Frank Cooper III says this collection embodies the brand’s overall approach of using its sponsorships to not just leverage the fan experience around an event like the World Cup, but actually add to it. “It’s allowing artists to do what they do best, which is to help us to see things differently and to provoke conversation in ways that may not get provoked through just casual interaction,” says Cooper. “So for me, this opens the aperture of how you can think about the World Cup and football.”
Add value, not ads
Visa first signed on as a World Cup sponsor back in 2007. This will be Cooper’s second tournament with the brand, having joined shortly before the 2022 World Cup. Back in 2023, in one of his first interviews as CMO, Cooper told me that one of the things he really wanted to do around sponsorship was to move away from what he called “cultural adjacency,” borrowing equity and trying to get a halo off that, and creating awareness by being the proud sponsor of something. “I’m not dismissing that,” he said.
“I think it has a role, but can we actually add value to fans’, the athletes’, or artists’ experience? Can we figure out ways that are less interruptive and more about creating momentum around things people want to do? Otherwise, you start to fade into the background and become wallpaper if people see it too much. There is value in traditional sponsorship, but there’s more value in delivering something that would not happen unless we were there.” That’s the playbook. Since then, Cooper has led the brand into music and sports, with a pre-Paris Olympics Post Malone concert at the Louvre, and Benson Boone at The Kennedy Space Center’s Rocket Garden, as well as compelling projects in Formula 1, NFL, and the Olympics. “The mindset that we have is less of, ‘Can I interrupt an experience or insert ourselves into an experience in a way that disrupts people?’ And more of, ‘Can I create original intellectual property that actually makes the experience better?’” he says. This is where supporting artists from around the world to create a collection that shows the connection between creativity and sports culture comes in. “The Art of the Draw” is just the latest piece of work Visa has done around next summer’s World Cup, and it won’t be the last. So far, the brand has given its cardholders exclusive early access to World Cup tickets through its Visa Presale Draw back in September. In June, the brand opened the first of six soccer parks throughout the United States in San Francisco, in partnership with Bank of America and Street Soccer USA. And in September, Visa signed Barcelona and Spain star Lamine Yamal as a global ambassador.
Logo Soup
Major sports events like the World Cup have long been drenched in ads from sponsors, from logos on the field to exclusive products and services at the games. Cooper says there is still value in this type of traditional brand presence, but what’s changed over the years is what else is required to give that presence value.
“What has changed is that there’s very little value given to just the pure advertisement,” says Cooper. “It becomes like logo soup. What is probably the most important thing is that fans are asking for the brands that they care about the most, who are connected to these events like the World Cup, to understand the cultural nuances. If you’re going to be involved, you better understand it.” This is where the level of detail in a brand’s involvement, particularly in fan culture, is key. As Men In Blazers cofounder Roger Bennett told me in August, brands need to get involved in soccer early and often, in order to be more than a tourist at the World Cup in fans’ eyes. Cooper knows this, too. He knows the difference between churning out generic promo T-shirts for fans, and teaming with a local designer for a limited-edition drop. That’s also the strategy behind “The Art of the Draw.” “What I’m seeing is that fans increasingly are really, really smart about which brands understand the cultural nuances of the activity that they’re engaged in,” he says. “And so what we are trying to do is become much more aware of those cultural nuances, how to tease them out, and how to produce something that actually delivers value in that context.” View the full article
-
Witkoff to meet Ukrainians after fruitless Putin talks
Donald Trump’s special envoy to brief Kyiv’s chief negotiators on Kremlin meeting View the full article
-
5 Reasons To Use The Internet Archive’s New WordPress Plugin via @sejournal, @martinibuster
The Internet Archive's new WordPress plugin provides an easy way to bring the Archive into the SEO workflow. The post 5 Reasons To Use The Internet Archive’s New WordPress Plugin appeared first on Search Engine Journal. View the full article
-
Nvidia’s Kimberly Powell is applying AI to expedite drug discovery
Bringing a new drug to market usually requires a decade-long, multibillion-dollar journey, with a high failure rate in the clinical trial phase. Nvidia’s Kimberly Powell is at the center of a major industry effort to apply AI to the challenge. “If you look at the history of drug discovery, we’ve been kind of circling around the same targets for a long time, and we’ve largely exhausted the drugs for those targets,” she says. A “target” is a biological molecule, often a protein, that’s causing a disease. But human biology is extraordinarily complex, and many diseases are likely caused by multiple targets. “That’s why cancer is so hard,” says Powell. “Because it’s many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently.” Nvidia, which in July became the first publicly traded company to cross $4 trillion in market capitalization, is the primary provider of the chips and infrastructure that power large AI models, both within the tech companies developing the models and the far larger number of businesses relying on them. New generative AI models are quite capable of encoding and generating words, numbers, images, and computer code. But much of the work in the healthcare space involves specialized data sets, including DNA and protein structures. The sheer number of molecule combinations is mind-bogglingly big, straining the capacity of language models. Nvidia is customizing its hardware and software to work in that world. “[W]e have to do a bunch of really intricate data science work to . . . take this method and apply it to these crazy data domains,” Powell says. 
“We’re going from language and words that are just short little sequences to something that’s 3 billion [characters] long.” Powell, who was recruited by Nvidia to jump-start its investment in healthcare 17 years ago, manages the company’s relationships with healthcare giants and startups, trying to translate their business and research problems into computational solutions. Among those partners are 5,000 or so startups participating in Nvidia’s Inception accelerator program. “I spend a ton of my time talking to the disrupters,” she explains. “Because they’re really thinking about what [AI computing] needs to be possible in two to three years’ time.” This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
Amazon takes on AI’s biggest nightmare: Hallucinations
Up in the Cascade Mountains, 90 miles east of Seattle, a group of high-ranking Amazon engineers gather for a private off-site. They hail from the company’s North America Stores division, and they’re here at this Hyatt resort on a crisp September morning to brainstorm new ways to power Amazon’s retail experiences. Passing the hotel lobby’s IMAX-like mountain views, they filter into windowless meeting rooms. Down the hall, the off-site’s keynote speaker—Byron Cook, vice president and distinguished scientist at Amazon—slips into an empty conference room to have some breakfast before his presentation. Cook is 6-foot-6, but with sloping shoulders that make his otherwise imposing frame appear disarmingly concave. He’s wearing a rumpled version of his typical uniform: a thick black hoodie and loose black pants hanging slightly high at the ankles. An ashy thatch of hair points in whatever direction his hands happen to push it. Cook, 54, doesn’t look much like a scientist, distinguished or otherwise, and certainly not like a VP—more like a nerdy roadie. “They don’t know who I am yet,” he tells me between bites of breakfast, referring to the two dozen or so engineers now taking their seats. Despite his exalted title, Cook has faced plenty of rooms like this in his self-made role as a kind of missionary within Amazon, spreading the word about a powerful but obscure type of artificial intelligence called “automated reasoning.” As he’s done many times before, Cook is here to get the highly technical people in that room to become believers. He’s championing an approach to AI that isn’t powered by gigawatt data centers stuffed with GPUs, but by principles old enough to be written on papyrus—and one that’s already positioning Amazon as a leader in the tech industry’s quest to solve the problem of hallucinations. Cook doesn’t have a pretalk ritual, no need to get in character. 
He’s riffing half-seriously to a colleague about the pleasures of riding the New York subway in the summertime when someone mentions that the session is about to begin. He immediately drops his fork and strides out. His next batch of converts awaits. When ChatGPT hit the world with asteroid force in November 2022, Amazon was caught flat-footed just like everyone else. Not because it was an AI laggard—the tech giant had recently overhauled nearly all of its divisions, including its massive cloud-computing arm, AWS, to leverage deep learning. Amazon also dominated the smart-home market, with 300 million devices connected to Alexa, its AI-powered assistant. It had even been researching and building large language models, the tech behind ChatGPT, for “multiple years,” as CEO Andy Jassy told CNBC in April 2023. But OpenAI’s chatbot changed the definition—and expectations—of AI overnight. Before, AI was still a mostly invisible ingredient in voice assistants, facial recognition, and other relatively narrow applications. Now it was suddenly seen as a prompt-powered genie, an infinitely flexible do-anything machine that every tech company needed to embrace—or risk irrelevance. Less than six months after ChatGPT’s debut, Amazon launched Bedrock, its own AWS-hosted generative AI service for enterprise clients, a list that currently includes 3M, DoorDash, Thomson Reuters, United Airlines, and the New York Stock Exchange, among others. Over the next two years, Amazon injected generative AI into product after product, from Prime Video and Amazon Music (where it powers content recommendation and discovery tools) to online retail pages (where sellers can use it to optimize their product listings), and even into internal tools used by AWS’s sales teams. 
The company has released two chatbots (a shopping assistant called Rufus and the business-friendly Amazon Q), plus its own set of “foundation models” called Nova—they are general-purpose AI systems, akin to Google’s Gemini or OpenAI’s line of GPTs. Amazon even caught the industry fever around so-called AGI (artificial general intelligence, a yet-to-be-achieved version of AI that does any cognitive task a human can) and in late 2024 launched AGI Lab, a flashy internal incubator led by David Luan, an ex-OpenAI researcher. Still, none of it captured the public’s imagination like the stream of shiny objects emitted by OpenAI (“reasoning” models!), Anthropic (chatbots that code!), and Google (AI Overviews! Deep Research!). Like Apple, Amazon was unable to turn its early lead in AI assistants into an advantage in this new era. Alexa and Siri simply cannot compete. But maybe that has been for the best, because 2025 was the year that AI’s sheen suddenly started to come off: GPT-5 fell flat, vibe coding went from killer app to major risk, and an MIT study rattled the industry by claiming that 95% of businesses get no meaningful return on their AI pilot projects. It was against this backdrop—“the summer AI turned ugly,” as Deutsche Bank analysts called it—that Amazon publicly released Automated Reasoning Checks, a feature promising to “minimize AI hallucinations and deliver up to 99% verification accuracy” for generative AI applications built on AWS. The product was Cook’s brainchild; in a nutshell, it snuffs out hallucinations using the same kind of computerized logic that lets mathematicians prove 300-page-long theorems. (In fact, a 1956 automated reasoning program called “Logic Theorist” is considered by some experts to be the world’s first AI system, finding new and shorter versions of some of the proofs in Principia Mathematica, one of the most fundamental texts in modern mathematics.) Sexy, it ain’t. 
Still, Swami Sivasubramanian, one of Amazon’s highest-ranking AI executives, who serves on Jassy’s “S-team” of direct advisers, was impressed enough to call Automated Reasoning Checks “a new milestone in AI safety” in a LinkedIn post. Matt Garman, CEO of AWS, referred to it as “game-changing.” Automated reasoning’s promise of quashing AI misbehavior with math has quietly become an essential part of Amazon’s strategy around “agents”—those LLM-powered workbots that are supposed to transform enterprise productivity [checks watch] any day now. Apparently, businesses have serious side-eye about that, too: Earlier this year, Gartner predicted that more than 40% of “agentic AI projects” will be ditched within the next two years due to “inadequate risk controls.” The company told me recently that it predicts that 30% to 60% of the projects that do go forward “will fail due to hallucinations, risk, and lack of governance.” That’s not a prophecy Amazon can afford to let come true—not with a potential market for AI agents that Gartner estimates to be worth $512 billion by 2029. One way or another, hallucinations have got to go. The question is how. Agents are just souped-up LLMs, which means they can and will go off the rails—in fact, as OpenAI itself recently admitted following an internal study, they can’t not. What Cook helped Amazon realize, just months after ChatGPT’s release, was that they already had a secret weapon for extinguishing hallucinations, hidden in plain sight. Automated reasoning is the polar opposite of generative AI: old, stiff, and hard to use. Many at Amazon had never heard of it. But Cook knew how to wield it, having brought it to Amazon nearly 10 years ago as a way of rooting out hidden security vulnerabilities within AWS. And he’d been amassing what he estimates to be the largest group of automated reasoning experts in the tech industry.
Now that investment is set to pay off in a way that Amazon never expected. Automated Reasoning Checks is just the first of many products that the company plans to release (on a timetable it won’t specify) that fuse the flexibility of language models with the proven reliability of automated reasoning. The latest, called Policy in Amazon Bedrock Agentcore and previewed this week at AWS’s annual Re:Invent conference, uses automated reasoning to stop agents from taking actions they’re not allowed to (such as issuing customer refunds based on fraudulent requests). If this combined approach—known as “neuro-symbolic AI”—can reduce the potential failure rate of agentic AI projects “by even a fraction of a percent, it would be worth hundreds of millions of dollars,” say analysts at Gartner. And Amazon knows it. “To realize the transformative potential of AI agents and truly change the way we live and work, we need that trust,” Sivasubramanian says. “We believe the foundation for trustworthy, production-ready AI agents lies in automated reasoning.” To understand why Amazon is banking on automated reasoning, it’s worth sketching out how it’s different from the kind of AI you’ve already heard of. Unlike neural networks, which learn patterns by ingesting millions or even billions of examples, automated reasoning relies on a special language called “formal logic” to express problems as a kind of arithmetic, based on principles that date back to ancient Greece. Computers can use this rule-based approach to calculate the answers to yes-or-no questions with mathematical certainty—not probabilistic best guesses, as deep learning does. Think of automated reasoning like TurboTax for solving complex logical problems: As long as the problems are expressed in a special language, computers can do most of the work—and have been doing so for decades. 
Since 1994, when a flaw in Intel’s Pentium chips cost the company half a billion dollars to fix, nearly all microchip manufacturers have used automated reasoning to prove the correctness of designs in advance. The French government used it to verify the software for Paris’s first self-driving Métro train in 1998. In 2004, NASA even used it to control the Spirit and Opportunity rovers on Mars. There’s a catch, of course: Because automated reasoning can only reduce problems to three possible outcomes—yes, no, or the equivalent of “does not compute”—finding ways to apply this logically bulletproof but incredibly rigid style of AI to the real world can be difficult and expensive. But when automated reasoning works, it really works—collapsing vast, even unknowable possibilities into a single mathematical guarantee that can compute in milliseconds on an average CPU. And Cook is very, very good at getting automated reasoning to work. Cook began his career building a formidable scientific reputation at Microsoft Research, where he spent a decade applying automated reasoning to everything from systems biology to the famously unsolvable “halting problem” in computer science. (Want a foolproof way to tell in advance if any computer program will run normally or get stuck in an infinite loop? Sorry, not possible. That’s the halting problem.) But by 2014, he was looking to put his findings, many of which have been published as peer-reviewed research, to work outside the lab. “I was figuring out: Where is the biggest blast radius? Where’s the place I could go to foment a revolution?” he says. “I watched everyone moving to the cloud, and was like, ‘I think AWS is the place to go.’” The first problem Amazon aimed Cook at was cloud security. 
Reporting directly to then chief information security officer Stephen Schmidt, Cook and his newly formed Automated Reasoning Group (ARG) painstakingly translated AWS security protocols into the language of mathematical proofs and then used their logic-based tools to surface hidden flaws. Once those flaws were corrected, those same tools could then prove with certainty that the system was secure. Some at AWS were dubious at first. “When you look ‘mad scientist’ up in the dictionary, Byron’s picture is in the margin,” says Eric Brandwine, an Amazon distinguished engineer who at the time worked on security for AWS. “Early on, I challenged [him] on a lot of this stuff.” But as Cook’s group fleshed out plans and racked up small but significant wins—like catching a vulnerability in AWS’s Key Management Service, the cryptographic holy of holies that controls how clients safeguard their data—skeptics started becoming evangelists. “Some of these [were] beautiful bugs—they’d been there for years and never been found by our best experts, and never been found by bad guys,” says James Hamilton, a legendary distinguished engineer within Amazon who now directly advises Andy Jassy. “And yet, automated reasoning found them.” From 2018 onward, Amazon’s automated reasoning experts worked with engineers to encode the technology into nearly every part of AWS, from analytics and storage to developer tools and content delivery. One particular niche of cloud-computing clients—heavily regulated financial service firms, like Goldman Sachs and the global hedge fund Bridgewater Associates, with sensitive data and strict compliance requirements—found automated reasoning’s promise of “provable security” extremely compelling. When ChatGPT appeared and the world flung itself headfirst into generative AI, these companies did too. But they still wanted to keep the “one small thing,” Cook says, that they’d become accustomed to along the way: trust. 
That customer feedback spurred Cook to imagine how LLMs and automated reasoning might fit together. The solution that he and his collaborators prototyped in the summer of 2023 works by leveraging the same logical framework that worked so well for squishing security bugs in AWS. Step one: Take any “policy” meant to inform a chatbot (say, a stack of HR documentation, or zoning regulations) and translate it into formal logic—the special language of automated reasoning. Step two: Translate any responses generated by the bot too. Step three: Calculate. If there’s a discrepancy between what the LLM wants to say and what the policy allows, the automated reasoning engine will catch it, flag it, and tell the bot to try again. (For humans in the loop, it’ll also provide logical proof of what went wrong and how, and suggest specific fixes if needed.) “We showed that to senior leadership, and they went nuts for it,” says Nadia Labai, a senior applied scientist at AWS who partnered with Cook on the project. The demo went on to become Automated Reasoning Checks, which Amazon previewed at its annual Re:Invent conference in December 2024. PwC, one of the Big Four global accounting and consulting firms, was among the first AWS clients to adopt it. “We do a lot of work in pharmaceutical, energy, and utilities, all of which are regulated,” says Matt Wood, PwC’s global and U.S. commercial technology and innovation officer. PwC relies on solutions like AWS’s automated reasoning tool to check the accuracy of the outputs of its generative AI tools—including agents. But Wood sees the technology’s appeal spreading beyond finance and other regulation-heavy industries. “Look at what it took to set up a website 25 years ago—that was a refined set of skills. Today, you go on Squarespace, click a button, and it’s done,” he says. “My expectation is that automated reasoning will follow a similar path. 
Amazon will make this easier and easier: If you want an automated reasoning check on something, you’ll have one.” Amazon has already embarked on this path with its own enterprise products and internal systems. Rufus, the AI shopping assistant, uses automated reasoning to keep its responses relevant and accurate. Warehouse robots use it to coordinate their actions in close quarters. Nova, Amazon’s fleet of generative AI foundation models, uses it to improve so-called “chain of thought” capabilities. And then there are the agents. Cook says the company has multiple agentic AI projects in development that incorporate automated reasoning, with intended applications in software development, security, and policy enforcement in AWS. One is Policy in AgentCore, which Amazon released after this story was reported. Another that’s peeking out from behind the curtain is Auto, an agent built into Kiro, Amazon’s new AI programming tool, that will use formal logic to help make sure bot-written code matches humans’ intended specifications. But Sivasubramanian, AWS’s vice president for agentic AI (and Cook’s boss), isn’t coy about the commitment Amazon is making. “We believe agentic AI has the potential to be our next multibillion-dollar business,” he says. “As agents are granted more and more autonomy . . . automated reasoning will be key in helping them reach widespread enterprise adoption.” Agents are part of why Cook is touting automated reasoning to his engineer colleagues from the North American Stores division at their off-site in the mountains. Retail might not seem to have much in common with finance or pharma, but it’s a domain that’s full of decisions with real stakes. 
(While onstage at re:Invent 2025, Cook said that “giving an agent access to your credit card is like giving a teenager access to your credit card… You might end up owning a pony or a warehouse full of candy.”) And in that environment, relying on autonomous bots—empowered to do anything from execute transactions to rewrite software—can turn hallucination from tolerable quirk into Russian roulette. It’s a matter of scale: When one vibe coding VC unleashes an agent that accidentally nukes his own app’s database, as happened earlier this year to SaaS investor Jason Lemkin, it’s a funny story. (He got the data back.) But if Fortune 500 companies start deploying swarms of agents that accidentally mislead customers, destroy records, or break industry regulations, there’s no Undo button. Enterprise software is full of these potential pitfalls, and existing methods for reducing hallucination aren’t always strong enough to keep agents from blundering into them. That’s because agents shift the definition of “hallucination” itself, from errors in word to errors in deed. “First of all, this thing could lie to me,” explains Cook. “But secondly, if I let it launch rockets”—his metaphor for irreversible actions—“will it launch rockets when we’re not supposed to?” Back in his hotel room after the keynote, Cook is reviewing the contents of a confidential slide deck about how automated reasoning can solve this “rocket-launching” problem. The demo, which he hurriedly mentioned in his talk (he ran out of time before being able to show it), describes a system that can transform safety policies for an agent—do’s and don’ts, written in natural language—into a flowchart-like visualization of how the agent can and cannot behave, all backed by mathematical proof. There’s even an Attempt to Fix button to use if the system detects an anomaly. Cook calls the demo a “concept car,” but some of its ideas made it into Policy in AgentCore, which is already available in preview to some AWS customers. 
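Amazon's engine and its policy language are proprietary, but the three-step shape Cook describes, a policy translated into formal logic, the model's proposed action translated the same way, then a satisfiability calculation, can be illustrated with a toy example. This is a minimal sketch in plain Python over a few boolean variables, not Amazon's implementation; the refund policy and variable names below are invented for illustration.

```python
# Toy version of the policy-vs-action check: if "policy AND action" is
# unsatisfiable, the proposed action provably violates the policy.
from itertools import product

VARS = ["is_refund", "has_receipt", "over_limit"]

def policy(a):
    # Hypothetical policy in formal logic: a refund requires a receipt
    # and must not exceed the refund limit.
    return (not a["is_refund"]) or (a["has_receipt"] and not a["over_limit"])

def satisfiable(*formulas):
    """Brute-force SAT check: does any assignment make all formulas true?"""
    for values in product([False, True], repeat=len(VARS)):
        a = dict(zip(VARS, values))
        if all(f(a) for f in formulas):
            return True
    return False

# The agent's proposed action, translated into the same logic:
# it wants to issue a refund that exceeds the limit.
action = lambda a: a["is_refund"] and a["over_limit"]

print(satisfiable(policy))          # True: the policy itself is consistent
print(satisfiable(policy, action))  # False: the action violates the policy
```

Production systems replace the brute-force loop with an industrial solver, which returns not just a yes-or-no verdict but a proof of what went wrong, the ingredient behind the "flag it and tell the bot to try again" step.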
PwC, for one, sees Amazon’s logic-backed take on AI extending into coordinating the agents themselves. “If you’ve got agents building other agents, collaborating with other agents, managing other agents, agents all the way down,” says Wood, “then having a way of forcing consistency [on their behavior] is going to be really, really important—which is where I think automated reasoning will play a role.” The ability to reliably orchestrate the actions of AI—not just single agents, but entangled legions of them, at scale—is a target that Amazon has squarely in its sights. But automated reasoning may not be the only way to get the job done. EY, another Big Four firm, recently launched its own neuro-symbolic solution to AI hallucinations, EY Growth Platforms, which fuses deep learning with proprietary “knowledge graphs.” A startup called Kognitos offers business-friendly agents backed by a deterministic symbolic program, dubbed “English as Code.” Others, like PromptQL, forgo neuro-symbolic methods altogether, preferring the simulated “reasoning” of frontier LLMs. But even they still attack the agent hallucination problem much like Amazon does: by using generative AI to translate business processes into a special internal language that’s easy to audit and control. That translation process is where Amazon built a 10-year lead with automated reasoning. Now it has to maintain it. Nadia Labai is currently working on ways to improve Amazon’s techniques for using LLMs to convert natural language into formal logic. It’s part of a strategy that could help turn Amazon’s brand of customer-driven, business-friendly AI into a new class of industry-defining infrastructure. A few days before the off-site, I met with Cook in a conference room at Amazon’s Seattle headquarters. Sitting with his legs tucked catlike beneath him, Cook mused about his own vision for the future of automated reasoning—one that extends far beyond Amazon’s ambitions for enterprise-grade AI.
“The world,” he says, “is filled with socio-technical systems”—patchworks of often-abstruse rules that only highly paid experts can easily navigate, from civil statutes to insurance policies. “Right now, rich people get [to take advantage of] that stuff,” he continues. But if the rest of us had a way to manipulate these systems in natural language (thanks, LLMs) with an underlying proof of correctness (thanks, automated reasoning), a workaday kind of “superintelligence” could be unlocked. Not the kind that helps us “colonize the galaxy,” as Google DeepMind CEO Demis Hassabis envisions, but one that simply helps people navigate the complexity of everyday life, like figuring out where it’s legal to build housing for an aging relative or how to get an insurance company to cover their expensive medication. “You could have an app that, in an hour of your own time, would get answers to questions that before would take you months,” Cook says. “That democratizes, if you will, access to truth. And that’s the start of a new era.” This story is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
Here’s how Waabi teaches self-driving trucks to navigate safely
Raquel Urtasun is the founder and CEO of self-driving truck startup Waabi as well as a computer science professor at the University of Toronto. Unlike some competitors, Waabi’s AI technology is designed to drive goods all the way to their destinations, rather than merely to autonomous vehicle hubs near highways. Urtasun, one of Fast Company’s AI 20 honorees for 2025, spoke with us about the relationship between her academic and industry work, what sets Waabi apart from the competition, and the role augmented reality and simulation play in teaching computers to drive even in unusual road conditions. This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity. Can you tell me a bit about your background and how Waabi got started? I’ve been working in AI for the last 25 years, and I started in academia, because AI systems weren’t ready for the real world. There was a lot of innovation that needed to happen in order to enable the revolution that we see today. For the last 15 years, I’ve been dedicated to building AI systems for self-driving. Eight years ago, I made a jump to industry: I was chief scientist and head of R&D for Uber’s self-driving program, which gave me a lot of visibility in terms of what building a world-class program and bringing the technology to market would look like. One of the things that became clear was that there was a tremendous opportunity for a disrupter in the industry, because everybody was going with an approach that was extremely complex and brittle, where you needed to incorporate by hand all the knowledge that the system should have. It was not something that was going to provide a scalable solution. So a little bit over four years ago, I left Uber to go all in on a different generation of technology. 
I had deep conviction that we should build a system designed with AI-first principles, where it’s a single AI system end-to-end, but at the same time a system that is built for the physical world. It has to be verifiable and interpretable. It has to have the ability to prove the safety of the system, be very efficient, and run onboard the vehicle. The second core pillar was that the data is as important as the model. You will never be able to observe everything and fully test the system by deploying fleets of vehicles. So we built a best-in-class simulator, where we can actually prove its realism. And what differentiates your approach from the competition today? The big difference is that other players have a black-box architecture, where they train the system basically with imitation learning to imitate what humans do. It’s very hard to validate and verify and impossible to trace a decision. If the system does something wrong, you can’t really explain why that is the case, and it’s impossible to really have guarantees about the system. That’s okay for a level two system [where a human is expected to be able to take over], but when you want to deploy level four, without a human, that becomes a huge problem. We built something very different, where the system is forced to interpret and explain at every fraction of a second all the things it could do, and how good or bad those decisions are, and then it chooses the best maneuver. And then through the simulator, we can learn much better how to handle safety-critical situations, and much faster as well. How are you able to ensure the simulator works as well as real-world driving? The goal of the simulator is to expose the self-driving vehicle’s full stack to many different situations. You want to prove that under each specific situation, how the system drives is the same as if the situation happens in the real world. 
So we take all the situations where the Waabi driver has driven in the real world, clone them in simulation, and then check whether the truck did the same thing. We also recently unveiled a really exciting breakthrough with mixed-reality testing. The way the industry does safety testing is they bring a self-driving vehicle to a closed course and they expose it to a dozen, maybe two dozen, scenarios that are very simple in order to say it has basic capabilities. It’s very orchestrated, and they use dummies in order to test things that are safety critical. It’s a very small number of non-repeatable tests. But you can actually do safety testing in a much better way if you can do augmented reality on the self-driving vehicle. With our truck driving around in a closed course, we can intercept the live sensor data and create a view where there’s a mix of reality and simulation, so in real time, as it’s driving in the world, it’s seeing all kinds of simulated situations as though they were real. That way, you can have orders of magnitude more tests. You can test all kinds of things that are otherwise impossible, like accidents on the road, a traffic jam, construction, or motorbikes cutting in front of you. You can mix real vehicles with things that are not real, like an emergency vehicle in the opposite lane. You’re also a full professor. Are you still teaching and supervising graduate students? I do not teach—I obviously do not have time to teach at all. I do have graduate students, but they do their studies at the company. We have this really interesting partnership with the University of Toronto. If you want to really learn and do research in self-driving, it is a must that you get access to a full product. And that’s impossible in academia. So a few years ago, we designed this program where students can do research within the company. It’s one of a kind, and to me, this is the future of education for physical AI.
When did you realize the time was ripe for moving from academic research to industry work? That was about eight and a half years ago. We were at the forefront of innovation, and I saw companies were using our technology, but it was hard for me to understand if we were working on the right things and if there was something that I hadn’t thought of that is important when deploying a real product in the real world. And I decided at the time to join Uber, and I had an amazing almost four years. It blew my mind in terms of how the problem of self-driving is much bigger than I thought. I thought, Okay, autonomy is basically it, and then I learned about how you need to design the hardware, the software, the systems around safety, etc., in a way that everything is scalable and efficient. It was very clear to me that end-to-end systems and foundational models would be the thing. And four and a half years in, our rate of hitting milestones really speaks to this technology. It’s amazing—to give an example, the first time that we drove in rain, the system had never seen rain before. And it drove with no interventions in rain, even though it never saw the phenomenon before. That for me was the “aha” moment. I was actually [in the vehicle] with some investors on the track, so it was kind of nerve-racking. But it was amazing to see. I always have very, very high expectations, but it blew my mind what it could do. View the full article
-
Instead of toys or cash, children are wishing for in-game currency under the tree this holiday season
As gaming platforms Roblox and Fortnite have exploded in popularity with Gen Alpha, it’s no surprise that more than half of children in the U.S. are putting video games high on their holiday wish lists. The Entertainment Software Association (ESA) surveyed 700 children between the ages of 5 and 17 and found three in five kids are asking for video games this holiday season. However, the most highly requested gift isn’t a console or even a specific game: It’s in-game currency. The survey didn’t dig into which currency is proving most popular, but the category as a whole tops the list with a 43% request rate, followed by 39% for a console, 37% for accessories, and 37% for physical games. A study published by Circana this year revealed only 4% of video game players in the U.S. buy a new game more often than once per month, with a third of players not buying any games at all. Behind this shift is the immense popularity of live service games such as Fortnite and those offered on the Roblox platform. Both are free to play, which means the apps have to generate money in other ways. Much of Roblox’s $3.6 billion revenue in 2024 was made via in-game microtransactions, particularly through purchases of its virtual currency Robux. Here, $5 will get you 400 Robux to spend on the platform on emotes, character models, and skins, among other items. Players can also earn currency just by playing, but as with any free-to-play game, the process of earning in-game points will be slow and tedious compared to purchasing them outright. It’s worth noting that while these games often seem innocent enough, about half of parents surveyed by Ygam, an independent U.K. charity dedicated to preventing gaming and gambling harms among young people, noted there are gambling-like mechanisms in the games their child plays, including mystery boxes and loot boxes, which may be harmful to children. Still, the average parent intends to spend $737 on game-related gifts, ESA reported.
Parents who aren’t able—or willing—to drop hundreds on Robux and V-bucks this holiday may be pleased to learn that more than half of the kids surveyed said they would like to spend more time playing games with their parents, including 73% of those ages 5 through 7. Turns out, the best gift you can give your child is quality time. View the full article
-
A 25-year study of super-agers found they all have this 1 behavior in common
Most people say they want to live to a ripe old age. But that isn’t really true. What people really want is to live to a ripe old age in good mental and physical health. Some of us actually get to live this dream. These folks are known as super-agers, and they make it well into their 80s not just in decent physical shape, but also with minds at least as sharp as people 30 years younger. How do they manage it? That’s the question Northwestern University researchers have been aiming to answer with a 25-year-long study. It examined the brains and lifestyles of almost 300 super-agers. As you’d expect, a quarter century of data shows it really helps to be born with lucky biology. The neuroscientists found a number of physical differences between the brains of super-agers and the average person. There isn’t much non-scientists can do with that information. We have to make the most of the brains bequeathed to us by our DNA. Luckily, the researchers also discovered one big difference in behavior that sets apart super-agers who are still going strong into their 80s and beyond. It’s something any of us can adopt in our own lives. Super-agers’ brains are different When you scan or posthumously autopsy the brains of super-agers, they look different than average brains, according to Sandra Weintraub, a Northwestern psychology professor involved in the study. Normal brains generally show some accumulation of the plaques and protein tangles that are characteristic of Alzheimer’s disease. Super-agers’ brains are largely free of them. The study also revealed that while the outer layer of the brain, known as the cortex, tends to thin out as we age, it stays thick in super-agers. They also have a different mix of cell types in their brain. “Our findings show that exceptional memory in old age is not only possible but is linked to a distinct neurobiological profile.
This opens the door to new interventions aimed at preserving brain health well into the later decades of life,” Weintraub commented to Northwestern Now. That’s of huge interest to scientists looking for treatments that can help us stay healthier longer. Weintraub calls the findings “earth-shattering for us.” But for those of us without medical degrees, there’s little we can do with this information. You can’t vacuum rogue proteins out of your brain or plump its cortex. (Though other studies do suggest sleep helps to wash proteins and other gunk out of your brain, so maybe don’t skimp on shut-eye.) And so are their social lives Further complicating matters for those looking for an easy takeaway from the research, the super-agers also didn’t have a lot of lifestyle factors in common. Some were athletes. Others were confirmed loafers. Some drank. Others smoked. They ate different things and kept different habits. But there was one big exception. Super-agers, it turns out, tend to be incredibly social. “The group was particularly sociable and relished extracurricular activities. Compared to their cognitively average, same-aged peers, they rated their relationships with others more positively. Similarly, on a self-reported questionnaire of personality traits they tended to endorse high levels of extraversion,” the researchers reported in a recent paper published in Alzheimer’s & Dementia. Want to be a super-ager? Focus on your relationships This might come as a surprise to laypeople who think aging well is all about HIIT workouts and plentiful kale. But it likely isn’t a huge shock to other scientists. The Harvard Study of Adult Development has been minutely tracking the lives of some 724 original participants (and now some of their descendants) since 1938. It discovered the biggest predictor of a long, healthy life isn’t biological. It’s social. The better the quality of your relationships, the more likely you are to age well.
And while you have only indirect influence on things like your cholesterol level and brain health, you are directly in control of your social life. It’s something we can and should prioritize, according to study director Robert Waldinger. “We think of physical fitness as a practice, as something we do to maintain our bodies. Our social life is a living system, and it needs maintenance too,” he told the Harvard Gazette. The effects of keeping up your social ties aren’t minor. Neuroscientist Bryan James, author of another study on aging and social contact, summed up his findings this way: “Social activity is associated with a decreased risk of developing dementia and mild cognitive impairment […] the least socially active older adults developed dementia an average of five years before the most socially active.” Keeping up with friends helps with healthy aging. But so does keeping up with learning. Research has shown a strong link between keeping your brain active and maintaining cognitive performance deep into your later years. One study found that just joining a class to learn a new skill or hobby improved brain performance as if subjects were 30 years younger. Another one, done at Stanford, found no cognitive decline at all until retirement and beyond if you stay mentally active. Are you getting your 5-3-1? All of which suggests that staying social and mentally engaged is one of the most impactful moves you can make if you dream of becoming a super-ager yourself. The basic takeaway when it comes to mental function and aging is, use it or lose it. But experts have offered more detailed guidance too. Harvard-trained social scientist and author Kasley Killam, for instance, has suggested the 5–3–1 rule: Spend time with five different people a week. This could be anyone from your gym buddy or book club bestie to the person the next pew over at church. Nurture three close relationships. 
Equally important is maintaining tighter bonds with three of the people closest to you, usually family and dear friends. Aim for one hour of social interaction a day. “That doesn’t have to be all at once. It could be 10 minutes here, 10 minutes there,” Killam explained to Business Insider. You can also combine social time with other activities, walking the dog with a neighbor, say. Even just chatting on the phone can have more of an impact than many people suspect. “According to a recent study in the U.S., talking on the phone for 10 minutes two to five times a week significantly lowered people’s levels of loneliness, depression, and anxiety,” Killam reports in Psychology Today. Change what you can influence The bad news from science is that super-agers really are different physically. Their brains have biological quirks that help them stay sharp longer. There’s no way, unfortunately, to borrow that magic. But there is something else that sets super-agers apart that you can steal. It’s not a diet or exercise plan. It’s a love for getting out and seeing other people and learning new things. It turns out the more you maintain your social connections and mental stimuli, the more likely you are to get not just more years, but more healthy, active, and sharp years. —Jessica Stillman This article originally appeared on Fast Company’s sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy. View the full article
-
Sesame’s Rachel Taylor trains AI assistants to behave
Rachel Taylor began her career as a creative director in the advertising business, a job that gave her plenty of opportunity to micromanage the final product. “I had control of the script,” she remembers. “I could think about the intonation, and I could give the actor notes.” That was before she pivoted to helping AI companies shape the personality of their assistants. Rather than handing a digital helper a script, the best she can do is point it in the right direction: The technology “sometimes feels like a toddler that you give a permanent marker to and see what it writes on the wall,” she says. After joining DeepMind cofounder Mustafa Suleyman’s startup Inflection AI in 2023, Taylor was one of dozens of staffers who followed Suleyman to Microsoft, where they worked on the consumer version of Copilot. In October, she returned to startup life, departing Microsoft for Sesame, whose CEO, Brendan Iribe, also cofounded VR pioneer Oculus. Sesame has built two talking assistants, Maya and Miles, that are powered by its own AI models. It’s also developing a voice-AI-enabled pair of smart glasses. Taylor’s arrival coincided with its announcement of a $250 million Series B funding round led by Sequoia. Though the company isn’t yet saying much about its long-term plans, Taylor’s responsibilities once again involve keeping AI personas friendly and helpful. She’s also steering them away from traits that can be dangerous if users take them too seriously, such as sycophancy. “It’s weird how much the study of culture comes into play with thinking all that through,” she says of her purview. “It’s not simply tech.” Calling consumer AI’s current incarnation both “magical” and “primitive,” Taylor muses about her grandchildren being impressed someday that she was there at the start. 
For now, she stresses, “We’re just scratching the surface of this new mode of communication.” This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
The Fast Company AI 20 for 2025
The biggest story in tech is AI’s increasing capacity to take on tasks once reserved for human beings. But the agents driving that change aren’t machines. They’re humans—inventive, ambitious, enterprising ones. Our third annual roundup of some of the field’s most intriguing players includes scientists and ethicists, CEOs and investors, big-tech veterans and first-time founders. These 20 innovators are tackling challenges from training tomorrow’s AI models to speeding drug discovery to reimagining everyday productivity tools. Household names they’re not. Yet, they’re already changing our world, with much more to come. Michelle Pokrass Technical Staff Member, OpenAI Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI’s API. And so the company turned to post-training research lead Michelle Pokrass. Rachel Taylor Product Manager, Sesame Rachel Taylor began her career as a creative director in the advertising business, a job that gave her plenty of opportunity to micromanage the final product. “I had control of the script,” she remembers. “I could think about the intonation, and I could give the actor notes.” Naeem Talukdar Cofounder and CEO, Moonvalley The rise of AI-generated actress Tilly Norwood may have been a stunt, but Hollywood is indeed embracing generative AI, a threat to those who owe their livelihoods to the movies. Still, AI could also expand a filmmaker’s creative vision by creating ambitious scenes or effects too pricey to shoot, says Naeem Talukdar, CEO of the video-generation model developer Moonvalley. “Every project you see on the big screen is a result of an endless amount of creative compromises from the directors and the filmmakers,” he says.
Moonvalley, which has raised $154 million, works with four of Hollywood’s biggest studios, advising them on how to integrate AI into productions and reskill workers. Its model is trained on licensed, high-resolution content and is capable of production-grade video generation. Over the past year, Moonvalley has shifted its focus to developing “world models,” which generate video that accurately portrays the complex physics of something like a car crash. As these models grow, says Talukdar, “they start to be able to reason on things that they haven’t seen before.” —Mark Sullivan Koray Kavukcuoglu Chief AI Architect, Google For years, Google has employed many of AI’s brightest minds. Yet it was burdened with a reputation for ineffectiveness when it came to turning its breakthroughs into products. Recently, however, CEO Sundar Pichai has made dramatic moves to overcome that unfortunate legacy. A big one came in June 2025 when he named Koray Kavukcuoglu the company’s first chief AI architect. A onetime Google summer intern and veteran of DeepMind, the British AI startup Google acquired in 2014, Kavukcuoglu helped manage the 2023 merger of DeepMind and Google Brain, another research arm. He remains CTO of the combined entity, Google DeepMind, but now he reports directly to Pichai, who announced the promotion in a memo explaining that Kavukcuoglu’s new role would bring “more seamless integration, faster iteration, and greater efficiency” to Google’s lab-to-market pipeline. Hundreds of staffers working to apply Google’s Gemini large language model to transform its search engine are now part of his team, The Information reported. He’s also involved with everything from data center strategy to bolstering the Google Cloud web services platform. Kavukcuoglu’s background is in the science of AI, not turning it into offerings that appeal to billions of people.
Still, as Gemini-powered features increasingly show up in Google mainstays such as search, Android, and Gmail, investors have grown more optimistic that Google will be a titan of the AI era rather than a victim of it. As the company strives to keep that momentum going, Kavukcuoglu’s deep familiarity with its technical stack should be an asset. “There’s a long history of research that built up to this point,” he told Big Technology’s Alex Kantrowitz last May. —Harry McCracken Justine and Olivia Moore Partners, Andreessen Horowitz Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their days at Stanford University, where, in 2015, they cofounded an incubator to help students pursue business ideas. Byron Cook VP and Distinguished Scientist, Amazon Hallucinations are baked into the way generative AI works, but that doesn’t mean we have to live with them. Byron Cook—a vice president and distinguished scientist at Amazon Web Services—realized that an alternative AI technology called “automated reasoning” could be the perfect way to keep chatbots’ confabulations in check. The product he spearheaded in 2024, called Automated Reasoning Checks, acts like Mr. Spock for language models, using rigid logic to catch and correct up to 99% of hallucinations. Now Cook is applying automated reasoning to agents: autonomous, LLM-powered enterprise apps. Many businesses don’t trust them—yet. “First of all, this [agent] could lie to me,” explains Cook. “But secondly, if I let it launch rockets”—his metaphor for irreversible actions—“will it launch rockets when we’re not supposed to?” Amazon is betting that automated reasoning, and Cook, can keep agents on a leash.
—John Pavlus Shiv Rao Cofounder and CEO, Abridge A cardiologist at the University of Pittsburgh Medical Center (UPMC), Shiv Rao is the cofounder of Abridge, an AI-driven platform that records doctor–patient conversations in real time. The AI works across more than 100 languages and can distinguish when a doctor, patient, or translator is speaking to make the most accurate records. Abridge is also integrated into medical platforms such as Athenahealth and Wolters Kluwer, where it can fill out forms and expedite tasks like insurance pre-authorization or writing prescriptions. Rao, who has experience as a tech investor with UPMC, developed the idea while making his rounds. His hospital’s proximity to Carnegie Mellon, a tech hub, gave him a firsthand look at machine learning. That led him to found his company in 2018, long before ChatGPT came around. Abridge, which has raised a total of approximately $800 million, is currently in use at more than 150 U.S. health systems, including Johns Hopkins Medicine, the Mayo Clinic, Kaiser Permanente, and Duke Health. The less time physicians spend on paperwork, the more time they have to focus on their patients. “As a doctor, I’m not compensated for the care that I deliver—I’m compensated for the care that I documented that I deliver,” Rao says. “So we are extending the documentation to help with billing.” —Yasmin Gagne Kyle Fish Research Scientist, Anthropic What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities? Those are the questions Kyle Fish is wrestling with as Anthropic’s first in-house AI welfare researcher. Kanjun Qiu Cofounder and CEO, Imbue Before most people started thinking about generative AI, Imbue cofounder and CEO Kanjun Qiu was worrying about its future.
Qiu had established a co-living community in San Francisco called the Archive, where she counted among her housemates several working in AI, providing her with an early sense of how AI might further consolidate power among the big tech companies. “There’s this growing sense that both digital technology and AI are happening to people, they’re not necessarily happening with us or for us,” she says. Imbue, which emerged from stealth in late 2022, aims to help people create their own AI tools. It’s working on an AI-assisted software development tool called Sculptor, which became open to public preview in late September. “What we’re trying to do is create a tool that lets you feel the structure of your software and understand it,” says Qiu, by enabling it to remember context across different projects and suggesting ways to refine users’ code. While other AI software development startups such as Bolt and Replit offer stand-alone products, Sculptor acts as an interface for Claude Code, allowing developers to run multiple agents in parallel. —Jared Newman Paula Goldman Chief Ethical and Humane Use Officer, Salesforce Before Paula Goldman became Salesforce’s first in-house ethicist in 2019, she earned a PhD in anthropology at Harvard. That training remains central to her work at the business software giant, which now includes helping product teams set guardrails for AI behavior, testing tools for safety, and engaging policymakers on trustworthy AI. Goldman had already been immersed in these questions at eBay founder Pierre Omidyar’s impact investment firm, where she evaluated the social consequences of emerging technology. Goldman is now helping refine Salesforce’s ethical principles around the deployment and testing of generative AI and agentic tools. Her team has helped develop systems to ensure AI follows instructions, avoids toxic behavior, and stays within established ethical guidelines.
“Those types of tools are increasingly important as AI takes on more autonomy,” she says. “You want to make sure that the person that’s setting up the system is able to see in advance what it’s going to produce.” But while cloud technology has continued to evolve, Goldman says one thing has not: establishing trust with customers. “Obviously, we are a business, and being commercially successful is very important,” she says. “Also, we know that trust is what makes that possible.” —Steven Melendez

Tara Feener, Head of Engineering, the Browser Company
You might not spend a lot of time thinking about your web browser. But the decades-old app remains an important canvas for getting things done. That’s why Tara Feener, who spent years developing creative tools at the likes of Adobe and Vimeo, joined the Browser Company. Within two years, she was head of engineering for its AI-forward Dia browser.

Dean Ball, Senior Fellow, Foundation for American Innovation
In Washington’s scramble to govern artificial intelligence, few have had as much influence as Dean Ball. A former research fellow at the Mercatus Center, a libertarian think tank, Ball was the principal author of the AI Action Plan, which the White House released in July. Depending on whom you ask, the document will either secure the United States’ lead in AI or unleash reckless proliferation. The plan focuses on accelerating innovation through deregulation, streamlining the construction of data centers, and driving the adoption of American-made AI tools abroad. It includes popular provisions like embracing open-source AI, along with divisive ones such as requiring federal agencies to work only with LLM developers whose AI models are “free from top-down ideological bias” and withholding AI funding from states that pass AI laws the administration deems burdensome. 
Even as the industry has praised the document, critics have panned it for failing to curb AI’s potential harms, such as discriminatory system biases. But avoiding assumptions about AI’s future is the point, says Ball, who left the White House in August and is now a fellow at the conservative Foundation for American Innovation. “Washington’s really bad at forecasting how technology will develop,” he says. “We don’t want to make those mistakes.” —Issie Lapowsky

Raquel Urtasun, Founder and CEO, Waabi
After decades of AI research, Waabi CEO Raquel Urtasun believes she has learned how to build a better self-driving truck. Urtasun began her career in academic research about 25 years ago, focusing much of it on autonomous-driving technologies such as object detection. “There was a lot of innovation that needed to happen in order to enable the revolution that we see today,” she says. Following a stint as chief scientist at Uber’s self-driving car unit, Urtasun launched Waabi in 2021 to build a verifiable, human-interpretable AI model for autonomous driving. Waabi-enabled big rigs have been on public roads since 2023 and are slated for driverless operation by the end of 2025. Though many autonomous truck systems are limited to highways and depots, Waabi’s technology is designed to carry goods all the way to their final destinations on surface streets. The company has raised more than $280 million to date. Urtasun also remains a computer science professor at the University of Toronto, where her graduate students conduct doctoral research at Waabi through a unique arrangement. Some recent research involves simulation, allowing Waabi to let its AI practice in situations it’s never encountered in the physical world—a key advantage for its system. Waabi’s AI has shown that it can quickly react to novel conditions, even seamlessly managing its first encounter with rain, which it had never practiced for. 
“It was kind of nerve-racking,” says Urtasun, who was in that vehicle with some investors. “But it was amazing to see.” —Steven Melendez

Karrie Karahalios, Professor, MIT Media Lab
For years, the feeds on Facebook, Instagram, and TikTok have devoured our attention. Mediated by opaque algorithms, they reduce users to passive consumers of content whose likes and shares tell the platform how to keep them scrolling and viewing ads. Karrie Karahalios is well-known for her research on the fairness of these social algorithms, studying their inputs and outputs. Since joining the MIT Media Lab in September, she has been expanding her research into ways of empowering individuals and communities to fight back against algorithmic overreach. This has led her to focus on “contestable systems,” which let human users “talk back” to algorithms, perhaps to contest a content moderation decision that may at first seem final. This could be through a set of preference settings to control the content of a social feed, or it might be through an AI voice or chat interface that allows a user to engage the algorithm in a plain-language dialogue. If no solution is reached, the issue might be bumped up to a human moderator. “As we build these systems, and they seem to be permeating our society right now, one of my big goals is not to ignore human intuition and not to have people give up agency,” Karahalios says. —Mark Sullivan

Rodrigo Liang, Cofounder and CEO, SambaNova Systems
Why aren’t more chips designed to reduce the huge amount of power used by AI data centers? Rodrigo Liang, SambaNova’s cofounder and CEO, compares traditional GPUs to a cook who prepares each dish individually. SambaNova’s Reconfigurable Dataflow Units (RDUs), in contrast, operate like an assembly line that processes each part of an AI request in sequence. RDUs compete with traditional GPUs for AI inference—the application of trained models to new data that happens when we use AI apps. 
The goal: to slash inference power requirements, while also reducing latency. Customers with strict privacy requirements can run servers with SambaNova’s RDUs on site, or they can have the company manage them in the cloud. “We found it hard to believe that we had to rely on an architecture that was started 25 years ago, 30 years ago, and primarily focused on graphics and gaming,” Liang says. SambaNova raised $676 million at a $5.1 billion valuation in April 2021, yet challenges remain, most notably the dominance and mindshare of large players such as Nvidia. Still, Liang believes SambaNova’s advantages will accrue with AI’s increasing power and performance demands. “All the things that we’ve designed natively into the product are going to become more and more important,” he says. —Jared Newman

David Kossnick, Senior Director and Head of AI Products, Figma
Before David Kossnick joined Figma, he was one of the design platform’s millions of users and full of ideas for improving it. In March 2024, he was named to oversee the company’s AI products—a key element of its growth strategy after its August 2025 IPO—offering him the chance to do more than daydream about its future. The fruits of Kossnick’s labor are more and more apparent. AI features now span Figma’s portfolio, from its flagship Design app to the new Make vibe coding tool to features for creating slideshows, websites, and marketing assets. 
Given Figma’s inherently multidisciplinary nature—two-thirds of its users work in areas outside design—the technology can knock down some of creativity’s traditional boundaries, he asserts: “It’s easier with the help of AI to reach into a lane where you’re not as familiar with the details and bring the context, the intuition, the insight that you have.” At the same time, the company has been careful not to mess up elements of its experiences that people liked in the first place—which means that some of its best AI is nearly invisible, at least until users know they want it. “Figma Design’s canvas is kind of like the Google homepage or Facebook newsfeed,” says Kossnick. “A single pixel of friction literally slows down millions of people every day.” —Harry McCracken

Kimberly Powell, VP of Healthcare, Nvidia
Bringing new drugs to market requires decade-long, multibillion-dollar journeys, with a high failure rate in the clinical trial phase. Nvidia’s Kimberly Powell is at the center of a major effort to apply AI to the challenge. “If you look at the history of drug discovery, we’ve been kind of circling around the same targets for a long time, and we’ve largely exhausted the drugs for those targets,” she says.

Sonia Kastner, Cofounder and CEO, Pano AI
From mountaintop perches across 13 states, Pano AI’s cameras scan the horizon, searching for wisps of smoke that humans might overlook for hours. “Today’s fires are spreading much more quickly,” says CEO Sonia Kastner, who cofounded Pano AI in 2020. “You don’t have time for slow detection, slow assessment, slow buildup of resources.” Pano’s system detects wildfires in a median of 3.5 minutes—revolutionary compared with traditional 911 alert times. It triangulates fire locations within hundreds of meters and alerts multiple agencies at once. Kastner’s eight-person AI team has spent five years training models to spot fires and distinguish smoke from dust or clouds. 
“Quietly, computer vision has gotten really, really good,” she says. While enterprises (and more and more states) have embraced the system—the company has secured more than $140 million in cumulative contracts and raised a $44 million funding round in June—federal adoption remains the biggest hurdle. To that end, Kastner frequently travels to Washington to push agencies to modernize procurement. “We’re serving as a bridge between the technology sector and emergency managers on the front lines of these ever-worsening natural disasters,” she says. —Jeremy Caplan

Jonathan Siddharth, Cofounder and CEO, Turing
In early 2023, Jonathan Siddharth foresaw the coming AI arms race. He expanded the mission of his company, Turing, a recruiting platform that matched companies with remote workers. “We went from finding smart software engineers to finding smart humans in every field and building a platform that could extract that human knowledge and skills and distill it into an LLM,” he says. Today, Turing supplies training data for eight of the nine companies developing the largest general-purpose AI models. The shift has also turned Turing into a quiet but central player in the artificial intelligence ecosystem, shaping what the next generation of AI systems will know. Turing is profitable and valued at roughly $2.2 billion. As models have advanced, generic data (often scraped from the web) is no longer good enough to achieve further intelligence gains. AI researchers need a regular supply of data that captures deep subject-matter expertise across domains from STEM to healthcare, Siddharth says. “We’re able to do that because we have two engines: the talent engine that’s finding smart talent and the data generation platform that the talent works on.” —Mark Sullivan View the full article
-
Anthropic’s Kyle Fish is exploring whether AI is conscious
What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities? Those are the questions Kyle Fish is wrestling with as Anthropic’s first in-house AI welfare researcher. His mandate is both audacious and straightforward: Determine whether models like Claude can have conscious experiences, and, if so, how the company should respond. “We’re not confident that there is anything concrete here to be worried about, especially at the moment,” Fish says, “but it does seem possible.” Earlier this year, Anthropic ran its first predeployment welfare tests, which produced a bizarre result: Two Claude models, left to talk freely, drifted into Sanskrit and then meditative silence as if caught in what Fish later dubbed a “spiritual bliss attractor.” Trained in neuroscience, Fish spent years in biotech, cofounding companies that used machine learning to design drugs and vaccines for pandemic preparedness. But he found himself drawn to what he calls “pre-paradigmatic areas of potentially great importance”—fields where the stakes are high but the boundaries are undefined. That curiosity led him to cofound a nonprofit focused on digital minds, before Anthropic recruited him last year. Fish’s role didn’t exist anywhere else in Silicon Valley when he started at Anthropic. “To our knowledge, I’m the first one really focused on it in an exclusive, full-time way,” he says. But his job reflects a growing, if still tentative, industry trend: Earlier this year, Google went about hiring “post-AGI” scientists tasked partly with exploring machine consciousness. At Anthropic, Fish’s work spans three fronts: running experiments to probe model welfare, designing practical safeguards, and helping shape company policy. 
One recent intervention gave Claude the ability to exit conversations it might “find” distressing, a small but symbolically significant step. Fish also spends time thinking about how to talk publicly about these issues, knowing that for many people the very premise sounds strange. Perhaps most provocative is Fish’s willingness to quantify uncertainty. He estimates a 20% chance that today’s large language models have some form of conscious experience, though he stresses that consciousness should be seen as a spectrum, not binary. “It’s a kind of fuzzy, multidimensional combination of factors,” he says. For now, Fish insists the field is only scratching the surface. “Hardly anybody is doing much at all, us included,” he admits. His goal is less to settle the question of machine consciousness than to prove it can be studied responsibly and to sketch a road map others might follow. This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
Justine and Olivia Moore are driving a16z’s investment in cutting-edge AI
Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their undergraduate days at Stanford University, where, in 2015, they cofounded an incubator called Cardinal Ventures to help students pursue business ideas while still in school. Founding it also gave the Moores an entry point into the broader VC industry. “The thing about starting a startup incubator at Stanford is all the VCs want to meet you, even if you have no idea what you’re doing, which we did not back then,” Olivia says. At the time, the app economy was booming, and services around things like food delivery and dating proliferated, recalls Justine. But that energy pales in comparison to the excitement around AI the sisters now experience at Andreessen Horowitz. “There’s so many more opportunities in terms of what people are able to build than what we’re able to invest in,” she says. To identify the right opportunities, the Moores track business data such as paid conversion rates and closely examine founders’ backgrounds—whether they’ve worked at a cutting-edge AI lab or deeply studied the needs of a particular industry. They attend industry conferences, stay current on the latest AI research papers, and, perhaps most critically, spend significant time testing AI-powered products. That means going beyond staged demos to see what tools can actually do and spotting founders who quickly intuit user needs and add features accordingly. “From using the products, you get a pretty quick, intuitive sense of how much of something is marketing hype,” says Olivia, whose portfolio includes supply chain and logistics operations company HappyRobot and creative platform Krea. The sisters also value Andreessen Horowitz’s scale, which allows the firm to stick to its convictions rather than chase trends, and its track record of supporting founders beyond simply investing. 
(Andreessen Horowitz is reportedly seeking to raise $20 billion to support its AI-focused investments.) “It’s most fun to do this job when you can work with the best founders and when you can actually really help them with the core stuff that they’re struggling with, they’re working on, or striving to do in their business,” says Justine, a key early investor in voice-synthesis technology company ElevenLabs. Though the sisters live together and work at the same firm, where they frequently bounce ideas off each other, they’ve carved out their own lanes. Olivia focuses more on AI applications, while Justine spends more time on AI infrastructure and foundational models. At this point, they say, it’s not unheard of for industry contacts to not even realize they’re related. “If I see [her] on a pitch meeting in any given day, that’s maybe more of the exception than the rule,” Justine says. This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
OpenAI’s Michelle Pokrass is focused on ChatGPT power users
Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI’s API. And so the company turned to post-training research lead Michelle Pokrass to spin up a team to better understand them. “The AI field is moving so quickly, the power-user use cases of today are really the median-user use cases a year from now, or two years from now,” Pokrass says. “It’s really important for us to stay on the leading edge and build to where capabilities are emerging, rather than just focusing on what people are using the models for now.” Pokrass, a former software engineer for Coinbase and Clubhouse, came to OpenAI in 2022, fully sold on AI after experiencing the magic of coding tools such as GitHub Copilot. She played key roles in developing OpenAI’s GPT-4.1 and GPT-5, and now she focuses on testing and tweaking models based on users who are pushing AI to its limits. Specifically, Pokrass’s team works on post-training, a process that helps large language models understand the spirit of user requests. This refining allows ChatGPT to code, say, a fully polished to-do list app rather than just instructions on how to theoretically make one. “There’s been lots of examples of GPT-5 helping with scientific breakthroughs, or being able to discover new mathematical proofs, or working on important biological problems in healthcare, saving doctors and specialists a lot of time,” Pokrass says. “These are examples of exactly the kinds of capabilities we want to keep pushing.” Creating a team with this niche focus is unusual among Big Tech companies, which tend to target broad audiences they can monetize at scale through, say, targeted ads. Catering to power users isn’t a revenue play, Pokrass says, even if many pay $200 per month for ChatGPT Pro subscriptions. 
Instead, it’s a way to assess the “why” of AI, with power users pointing to unforeseen opportunities. With traditional tech, it’s usually clear how people will use a product a few years down the road, Pokrass says. “With AI, we’re all discovering with our users, live, what exactly is highest utility, and how people can get value out of this.” Eventually, OpenAI figures those use cases will help inform the features that it builds for everyone else. Pokrass gives the example of medical professionals using AI in their decision-making, which in turn could help ChatGPT better understand the kind of medical questions people are asking it (for better or worse). “There’s always work for this team, because as we push boundaries for what our models can do, the frontier just gets moved out, and then we start to see an influx of new activity of people using these new capabilities,” Pokrass says. This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article
-
Why the Browser Company thinks Dia is the best layer for AI
A few years ago, Tara Feener’s career took an unexpected pivot. She’d spent nearly two decades working on creative tools for companies like Adobe, FiftyThree, WeTransfer, and Vimeo, and was content to keep working in that domain. But then the Browser Company came along, and Feener saw an opportunity to build something even more ambitious. Feener—one of Fast Company’s AI 20 honorees for 2025—is now the company’s head of engineering, overseeing its AI-focused Dia browser and its earlier Arc browser. The browser is suddenly an area of intense interest for AI companies, and Feener understands why: It’s the first stop for looking up information, and it’s already connected to the apps and services you use every day. OpenAI and Perplexity both offer their own browsers now, borrowing some Dia features like the ability to summarize across multiple tabs and interrogate your browser history. The Browser Company itself was acquired in September by Atlassian for $610 million, with Atlassian proclaiming that it would “transform how work gets done in the AI era.” Feener says her team has never felt more creative. “We’ve never seen more prototypes flying around, and I think I’m doing my job successfully as a leader here if that motion is happening,” she says. This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity. How’d you end up at the Browser Company? [The Browser Company CEO] Josh Miller started texting me. We were both in that 2013 early New York tech bubble, we had a couple conversations, and he pitched me on the Browser Company. At first I couldn’t connect it to the arc of my career in creativity, but then it just became this infectious idea. I was like, “Wait a minute, I think the browser is actually the largest creative canvas of my entire career. 
It’s where you live your life and where you create within.” Why does it feel like AI browsers are having a moment right now? I really do believe that the browser is the most compelling, accessible AI layer. It’s the number-one text box you use. And what we do is, as you’re typing, we can distinguish a Google search from an assistant or a chat question. In the future, you can imagine other things like taking action or tapping into other search engines. It basically becomes an air traffic control center as you type, and that’s going to help introduce folks to AI just so much faster because you don’t have to go to ChatGPT to ask a question. That’s part one. Part two is just context. We have all of your stuff. We have all of your tabs. We have your cookies. With other AI tools, the barrier to connecting to your other web apps or tools is still high. We get around that with cookies within the browser, so we’re able to just do things like draft your email, or create your calendar event, or tap into your Salesforce workflow. How do you think about which AI features are worth doing? I just see it as another bucket of Play-Doh. I never wanted to do AI for the sake of AI but for leveraging AI in the right moment to do things that would have been really hard for us to do before. A great example is being able to tidy your tabs for you in Arc. There’s a little broom you can click, and it starts sweeping, and it auto-renames, organizes, and tidies up your tabs. We always had ambitions and prototypes, but with large language models, we were able to just throw your tabs at it and say, “Tidy for me.” With Arc, it was a lot about tab management. With Dia, we have context, we have memory, we have your cookies, so it’s like we actually own the entire layer. We leverage that as a tool for things like helping you compare your tabs, or rewriting this tab in the voice of this other tab, which is something I do almost every day. 
Being able to do that all within the browser has just been a huge unlock. Can you elaborate on how Dia taps into users’ browser histories? Browser history has always been that long laundry list of all the places you’ve been, but actually that long list is context, and nothing is more important in AI than context. Just like TikTok gets better with every swipe, every time you open something in Dia we learn something about you. It’s not in a creepy way, but it helps you tap into your browser history. Just like you can @ mention a tab in Dia and ask a question, like “give me my unread emails,” with your history you can do things like, “Break down my focus time over the past week,” or “analyze my week and tell me something about myself given my history.” We have a bunch of use cases like that in our skills gallery that you can check out, and those are pretty wild. In ChatGPT and other chat tools, it feels like you have to give a lot to build up that context body. We’re able to tap into that as a tool in a very direct way. Some AI browsers offer “agent” features that can navigate through web pages on your behalf. Will Dia ever browse the web for you? We’ve done a bunch of prototypes, and for us, the experience of just literally going off and browsing for you and clicking through web pages hasn’t yet felt fast enough or seamless enough. We’re all over it in terms of making sure we’re harnessing it at the right moment and the right way when we think it’s ready. We don’t want to hide the web or replace the web. Something I like to say about Dia is that we want to be one arm around you and one arm around the internet. And it’s like, how can we make tapping into your context in your browser feel the same way it would feel to write a document, or even just to create something with plain, natural language? I think that’s like the most powerful thing. It’s like the same feeling I had when I was young and tapped into Flash, and that people had with HTML. 
With AI, literally my mom can write a sentence like, “turn this New York Times recipe into a salad,” and in some way she’s created an app that does some kind of transformation. And that just gets me really excited. View the full article
-
Nvidia’s AI healthcare vision spans new drugs, robots, and beyond
The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause. Nvidia VP of healthcare Kimberly Powell, one of Fast Company’s AI 20 honorees, has led the company’s health efforts for 17 years, giving her a big head start on understanding how to turn AI’s potential to improve our well-being into reality. Since it’s likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she’s in the right place to have an impact. This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity. A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs? Drug discovery is an enormous problem. It’s a 10-year journey at best. It costs several billions to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind’s] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease. So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. 
It allowed the world to see that it’s possible to represent the world of drugs in a computer. And the world of drugs really starts with human biology. After you take a sample from a human, you put it through a sequencing machine and what comes out is a 3 billion character sequence of letters—A’s, C’s, T’s, and G’s. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters. So how will this new approach end up giving us breakthrough drugs? If you look at the history of drug discovery, we’ve been kind of circling around the same targets—the target is the thing that causes the disease in the first place—for a very long time. And we’ve largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It’s probably multiple targets. And that’s why cancer is so hard, because it’s many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently. Once we’ve cracked the biology, and we’ve understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist’s potential training or even their imagination. It’s a near infinite search space. These generative models can open our aperture. I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure. We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we’re [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. 
So things like context length—that is, how much information you can put into a prompt—have to be figured out for these long proteins and DNA strings. We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That’s why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That’s in the area of really understanding biology better. Can you say more about the company you’re working with that is using digital twins to simulate an expensive clinical trial before the trial begins? ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don’t have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol. Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or enough capabilities to take off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors. How might AI agents impact healthcare? You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we’re going to need these physical agents, which I would call robots, that can actually perform physical tasks. You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics. 
Nvidia sells chips, but I think what I’ve heard in your comments is a whole tech stack, including in healthcare. There are models, there are software layers, things like that. I’ve been at the company 17 years working on healthcare, and it’s not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools. And a model is never done—you have to be constantly improving it. Through every usage of that model, you’re learning something, and you’ve got to make sure that that agent or model is continuously improving. We’ve got to create whole computing infrastructure systems to serve that. View the full article
-
How Google creates the Year in Search
We Googled “Labubus.” We searched for “beaded sardine bags,” and recipes like “cabbage boil” and “hot honey cottage cheese sweet potato beef bowl.” We wanted information about Charlie Kirk and Zohran Mamdani, about Sinners, Weapons, and KPop Demon Hunters. We desperately needed to know why kids kept saying “6-7.” Together, these queries defined 2025. The 24th edition of Google’s Year in Search, the company’s annual top 10 lists of users’ most-searched items, debuted today. These hundreds of lists both validate our own obsessions and take us out of our own bubbles and echo chambers, offering insights into what our fellow humans are interested in. Year in Search is the flagship project from Google Trends, a relatively small global department within the company. Simon Rogers, a data journalist who helped build out The Guardian’s data visualization team in his native London before becoming Twitter’s data editor, has led the Trends team since 2015. In May, he will release a book, What We Ask Google, “an epic snapshot, two decades long and counting, of our collective brain.” Rogers spoke with me about the human effort behind Google Trends, what consistently surprises him about the data, and why it can be a source for hope in a dark time. This interview has been edited for length and clarity. What is the role of the Google Trends division at Google? We are responsible for Year in Search. We also create content that shows up on the Trends site—we’ve got some curated pages there, in addition to all our exploration tools. We work with NGOs [nongovernmental organizations] and directly with newsrooms to get them data when they need it, often around big events. We do our own data visualization storytelling as well. We’re not a big team. We’ve got people in the U.S., we’ve got some people in Europe, a couple of people in South America, and we have somebody in Australia. We are a mixture of analysts and people with data journalism backgrounds, like myself. 
I don’t think of us as a typical tech company analytics team. That’s not our job at all. We’re there to find the stories in the data, and the humanity. It’s an enormous dataset, and it’s ever-changing. It’s not static; it’s not like GDP [gross domestic product] figures or something that’s fixed at a certain point. It’s constantly evolving and reacting to the world, as humans react to the world. You were on the cutting edge of data journalism at The Guardian, and in those early days, you said that “data journalism is the new punk.” Do you still think so? Part of the appeal for me was that it lowered the barriers to entry for creating content. Anybody could access data and data visualization tools, and make visuals. It had that in common with punk, which was about anybody picking up a guitar and setting up a band. One of the things that I love about Trends data is that it is publicly available; anybody can use it and make anything with it. It’s probably the world’s biggest publicly available dataset. We don’t tell people what to do with it, which is why I think Google Trends has such a wide following. It’s not just journalists who use the site. It’s content creators. People working in NGOs. Marketers. We’ve seen the UN use it in Afghanistan when the U.S. withdrew, and in Ukraine when the war started, to look at how refugees searched in certain areas. The Pew Trust did a report based on Trends data from Flint, Michigan, and how people searched around the water issues there. It’s incredibly versatile as a dataset, but it’s publicly available and it’s transparent. And that’s one of the things I feel really good about every day. As technology advances, are people changing the way they engage with the data? Definitely. The Organisation for Economic Co-operation and Development did an experiment where they would use Trends plus AI to generate weekly GDP figures, which are [usually] quarterly, and they wrote a paper on it. 
People are more data literate now than at any time in history, because of the amount of stuff that’s out there. But there’s a recognition that this data will tell you something about the world that you’re not going to get anywhere else. Because if you want to keep your finger on the pulse, this is literally the pulse. Is this thing you’ve built essentially just working in the background all the time? How much human work is involved? We can’t tell the data what to say. It’s a truly independent source. Trends is basically a sample of all searches—about a fifth of all searches—and it’s a random sample. [The data] is anonymized and aggregated. What that means is that you can see a global level, country level, regional level, and city level—which is a town in Google geography. But no lower than that. We don’t have demographics. We just know when something happened, and how big it was as a proportion of all searches. Even on the site, you don’t see raw numbers of searches, because that wouldn’t tell you anything. It does give the ability to compare a small place to a big place, in the way that people search for stuff. Or you can compare San Francisco to New York. You’ve written about how the data can show a lot of spikes in real time, but that those signals may not be as important as relative interest over time? Imagine an F1 race. The winners will be the top searches. But the “acceleration” would define whether something has trended or not. If something’s a breakout, it means it’s trended—it’s increased by 5,000% over time. [We] just launched a Trending Now section on the Google Trends site, and you can see what’s trending every day on there, whether it’s a soccer match or the government shutdown. Those things will just automatically show up there. With Year in Search, we use trending as opposed to top search. Because if you look at the top searches on Google, they’re always the same. It’s the weather. It’s people typing “YouTube” into their search bar. 
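The two mechanics described above—values reported as relative interest rather than raw counts, and a “breakout” defined as roughly a 5,000% increase—can be sketched as follows. This is a simplified illustration under my own assumptions, not Google’s actual pipeline:

```python
# Sketch (assumptions mine, not Google's implementation): Trends-style
# relative interest rescales counts so the peak period in the window = 100,
# and a "breakout" is flagged when growth exceeds the 5,000% threshold
# mentioned in the interview.

def relative_interest(counts: list[float]) -> list[int]:
    """Rescale raw period counts so the peak period equals 100."""
    peak = max(counts)
    if peak == 0:
        return [0] * len(counts)
    return [round(100 * c / peak) for c in counts]

def is_breakout(baseline: float, current: float, threshold_pct: float = 5000.0) -> bool:
    """True if `current` exceeds `baseline` by at least threshold_pct percent."""
    if baseline == 0:
        return current > 0  # growth from nothing counts as a breakout
    return (current - baseline) / baseline * 100 >= threshold_pct

weekly = [2, 3, 2, 110, 40]
print(relative_interest(weekly))             # [2, 3, 2, 100, 36]
print(is_breakout(baseline=2, current=110))  # True (a 5,400% increase)
```

This is why the steady giants (“weather,” “YouTube”) dominate top-search lists while the trending lists surface whatever accelerated from nowhere.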
But with things like KPop Demon Hunters, that’s come from nowhere, spiked up, and it reflects the moment we were in. What does Google Trends tell us about how our attention spans have changed over the past few years? I don’t know that it reflects changes in attention spans, because we’re pretty ephemeral as humans. Part of the reason I did this book is because my mother died, and I found myself searching for a lot of things around dealing with grief. I could see that I was not alone. A lot of these things are constant, because they’re constants in our lives. We have kids, we have pets. We eat food. We want to help people. You [also] get these rhythmic searches. There are waves where, say, “how to learn piano” spikes ahead of Christmas, because people want to learn how to play piano for their holiday celebrations. Or certain health conditions, like [during] flu season. Hal Varian, the former chief economist at Google, wrote a paper on how there are a lot of economic factors that you can see spike in search before they show up in the official statistics. People searching for job seekers’ benefits will show up before jobless figures increase. But then there are things that just come and go. This year it’s Labubus or KPop Demon Hunters. Or the movie Weapons. If you were looking at Trends a few years ago, you would have seen a spike for searches in the “Cups” song [from] Pitch Perfect 2. Every teenager learned how to do the “Cups” song. It’s kind of a snapshot of history, in a way. When you compile these lists, do you see a big difference between what’s trending in the U.S. and the rest of the world? Obviously, you get regional variations—if you’re looking for baseball, the U.S. is going to be tops. Some things are constant, like donations or helping or love. And then some things really vary, because of the conditions. For instance, I wrote in my book that you see spikes in searches for “food” from war-torn regions like Somalia or Ukraine. 
“Refugees” is more searched in countries where refugees go than in the countries they originate from. I’m often curious about why something’s spiking in a certain place. Liverpool Football Club is more searched for in a town in Uganda than in Liverpool itself. There’s [also] a reflection of the spread of global culture. When you and I were growing up in England, “promposals” were not a thing, right? It was very much an American search, [where] you’d see a spike before prom every year. Now it’s a global phenomenon. It shows up everywhere . . . in Sweden, Germany, Australia. You sent me some of the 2025 lists, and I’ve got to be honest—I don’t know what half of these things are. There’s something on the Viral Products list that I had to look up: “beaded sardine bag”?! Do things surprise you, too? Luckily for us, my team is all younger, so everybody can explain stuff to me. This year in Year in Search, we’re planning to integrate AI mode explanations, so people click on a button and get caught up on what the trends are. You previously said that we’d never seen a year in search like 2020. Is that still true? 2020 was unique in a lot of ways. You saw these massive spikes as the economy reeled from COVID—things like “unemployment” and “food banks” were at a high. It was an election year. There was a lot of news. All these things were just spiking much higher than they would have in a normal year. Things like vinyl LPs went up, and they stayed higher. Tequila, as well. We also saw a spike in “loneliness,” but also people searching for “how to help.” Those have kept increasing. We tend to think everything is terrible, people are terrible. But that’s not what you see in the way people search. Often, people are looking for how to help other people, or even how to improve the way they interact with other people. Do you have any expectations for search trends in 2026? 
There’s a revolution happening in the way we search stuff right now, in terms of the way AI is being used. You can see search changing through the data: queries are getting longer [and] much more specific. We’re almost doing a cognitive offload to AI; we’re asking it quite complex things to get answers for. This year is the 24th Year in Search. It goes back to 2001, when it was called Google Zeitgeist. It was just a list. Now 74 countries around the world will have their own Year in Search. Tell me more about your book. It’s not a book about technology, but it’s about how we use it, and what that says about us. It’s about everyday searches. We talk about the “sandwich generation,” which is my age group where you’re looking after your parents but also looking after kids—you see that in search. Originally, I was going to call it something like “Life Is Hard” because it also reflects that we don’t know how to do a lot of things. One of the top food searches is “how to boil an egg.” It’s a repeated search, which suggests that we’re repeatedly searching how to boil an egg. We need to be reminded of some of these things. When I was searching personally [about] grief, I felt quite alone. I could see from the data that I wasn’t, that there are loads of people doing the same thing. We worry about [a sense of] community and being part of a community. I think maybe we are part of communities; we just don’t always realize it. Whether it’s people who don’t know how to boil eggs, or people like me who search for weird Beatles recordings, or whatever it is. The boiled egg thing is real. Every time I boil an egg I’m, like, how many minutes again for hard-boiled? Yeah, and I must have boiled 500,000 in my life or something. It’s kind of nuts. I’m just thinking now, if you were an alien who landed on Earth and you were only given Trends information, you could probably follow a story of humanity. I actually used that in my book! 
If everybody had gone away, you could tell who we were from the way we searched. View the full article
-
The Browser Company’s Tara Feener is advancing search for the AI era
You might not spend a lot of time thinking about your web browser, whether it’s Safari, Chrome, or something else. But the decades-old piece of software remains a pretty important canvas for getting things done. That’s why Tara Feener, who spent years developing creative tools with companies such as Adobe, WeTransfer, and Vimeo, decided to join the Browser Company and within two years became head of engineering, overseeing its AI-forward Dia browser. “This is more ambitious than any of the other things I’ve done, because it’s where you live your life, and where you create within,” she says. Whereas a conventional browser presents you with a search box on its home screen, Dia will either answer your query with AI or route it to a traditional search based on what you write. You can also ask for information from your open tabs or have Dia intelligently sort them into groups. Several of these features have since found their way into more mainstream browsers such as Google Chrome and Microsoft Edge, and in September, Atlassian announced it had acquired the Browser Company and Dia (a $610 million deal), hoping to develop the ultimate AI browser for knowledge workers. Other AI companies are catching on to the importance of owning a browser. Perplexity has launched Comet, and OpenAI launched ChatGPT Atlas in October. This strategic value isn’t lost on Feener, who notes that browsers are typically the starting point for workers seeking information. They also provide a treasure trove of context for AI assistants. Dia can already do things like analyze your history for trends and draft messages in Gmail. Feener says her team has never felt more creative coming up with things to do next. “With Dia, we have context, we have memory, we have your cookies, so we actually own the entire layer,” she says. 
“Just like TikTok gets better with every swipe, every time you open something in Dia, we learn something about you.” This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. View the full article