All Activity
- Past hour
-
Discover Core Update Data, Sitemap Tips & AI Risks – SEO Pulse via @sejournal, @MattGSouthern
This week's SEO Pulse highlights evolving AI link formats, cross-language sourcing biases in ChatGPT, and mounting pressure on traditional organic traffic. The post Discover Core Update Data, Sitemap Tips & AI Risks – SEO Pulse appeared first on Search Engine Journal. View the full article
-
What to Do When You Can't Feel a Muscle 'Working' While Exercising
You've probably heard that you should feel a certain muscle working when you do an exercise. Your biceps should burn a little when you're doing bicep curls, your quads when you're doing squats, and so on. But this isn't an ironclad rule. Sometimes you can get a totally effective workout without feeling any specific muscle at all.

So why do so many people tell you to pay attention to feeling the muscle working? Partly because it can be a useful teaching tool to make sure you're doing the exercise right—but that's only true for some exercises. And honestly, another big reason is the influence of bodybuilding lingo and techniques on gym culture in general. Bodybuilders who train for the stage operate with a piece-by-piece mindset: make sure you're working this muscle and not that one. That's fine if you're trying to fine-tune your physique after years of training, but that approach isn't needed to build muscle in the first place. So here's what you need to know.

You may not always feel a muscle, even if it's working

Here's the most important thing to know: you don't have to feel a muscle for it to be working. Say you're doing a barbell squat. A squat works your quads, your glutes, and a lot of other muscles besides. You may not feel every one of them, because when you're doing a heavy squat, your brain is processing a lot of information. It's feeling the weight of the bar on your back. It's remembering the technique cues you're trying to focus on. It's paying attention to your balance as you descend to make sure you don't tip over one way or another. It's keeping count of which rep you're on. Maybe sometimes a muscle manages to pipe up with "hey, I'm your quads and I'm kind of hurting right now." But your brain doesn't have time to listen to every muscle's nonsense, any more than a mom making dinner has time to listen to her toddler's every whine. Your brain is focused on the task at hand: making sure you complete the rep.
I like to think of some muscles as being "louder" than others. If I'm doing kettlebell swings, I might be more focused on the fact that my forearms are burning (from holding onto the kettlebell) and not feel my glutes working at all. But after 100 swings, hoo boy, you can bet my butt will feel like jelly. It just didn't give me that burning sensation in the moment.

When it matters whether you feel the burn, and when it doesn't

So what should you do if you don't feel the muscle working? Look for another way to be sure it's working. In the case of the compound exercises mentioned above, the fact that you completed the exercise is all the information you need. Your pullups used your lats. Your kettlebell swings and your squats used your glutes. There's simply no way around that.

Does it ever matter whether you're feeling the muscle? Yes, it can help if you're doing isolation exercises. In these exercises, like a bicep curl or a leg extension, you're trying to focus a movement on one muscle or a small muscle group. You're "isolating" that muscle. Your brain is a little more able to focus on the feeling from that one muscle, and isolations are the type of exercise where it may be possible to do a similar movement without working the target muscle. For example, say you're doing side-lying leg raises to work your hip abductors, particularly the gluteus medius. If you have your hips tilted or your legs angled slightly forward, you may feel the muscles toward the front of your hips working instead. But if you do the same exercise with your back to a wall, sliding your heel along the wall as you lift your leg, you'll feel it a lot more in the glute you're trying to isolate.

As a general rule, for compound exercises (where many muscles are working at once), it doesn't matter whether you feel the muscle. But if you're doing an isolation exercise, feeling the muscle is helpful feedback to make sure you're isolating the right one.
Don't reduce the amount of weight just to feel the muscle work

There's a lot of bad advice out there, and I'd like to call out one thing specifically: the advice to reduce the amount of weight you're lifting so that you can feel the muscles better. Sometimes people will say it's important to build a "mind-muscle connection." But you don't have to forgo weight on the bar to build that connection. If you'd like to spend more time feeling the muscle, do some isolation work in your warmups. (These are sometimes called "activation" exercises.) You can also do extra isolation work at the end of your workout to give those specific muscles a little more volume.

It's important to remember that different parts of your workout have different purposes. If you're squatting heavy, you need to put some fucking weight on the bar to keep building your strength and your skill at squatting. Often the lifts that make it hardest to feel a muscle are the lifts where that muscle is working the most! So don't give up on heavy, effective lifts just because you don't "feel" them as well as isolations or warmups.
-
How to become an SEO freelancer without underpricing or burning out
Many SEO professionals enter freelancing for the same reason: freedom. They dream of fewer meetings, flexible hours, and the ability to choose their own projects. What they don't expect? Freelancing isn't just "SEO without a boss." It's SEO plus sales, scoping, contracts, billing, and client management. Without those essential pieces, even the strongest SEOs struggle to make freelancing sustainable.

We'll break down each step in this process to bridge the gap between dream and reality. By the end of this article, you'll know exactly how to build a sustainable freelance practice so you can become a digital nomad answering client emails and enjoying mojitos on a beach in Bali (if you so choose).

Before you get started: Understand what you're actually building

Let's make one thing clear: SEO freelancing doesn't look like attending quarterly planning meetings to fight for budget or sending another sad Slack to the product team asking them to prioritize your recommendations. In that scenario, you're closer to a contractor embedded in someone's workflow than an independent freelancer. And that distinction matters. It determines how much control you have over your time, scope, and pricing.

SEO freelancing typically includes:

- A clearly scoped engagement with a defined start and end.
- Ownership over how the work is delivered, not just what's delivered.
- Pricing tied to outcomes or deliverables instead of availability.
- The ability to say no when a project doesn't fit.

So before you quit your job to take on your first client, make sure you know exactly what you're signing up for.
Step 1: Pick one thing and get unreasonably good at it

Now that you know what your SEO freelancing gigs should look like, here's the secret to how some freelancers charge $200/hour while others still struggle to get $40: specialization.

Generalist freelancers compete on availability and price. "I do SEO" means you're fighting everyone else who just "does SEO." You win projects by being there when the client needs someone — and your price is whatever they're willing to pay. Specialists, on the other hand, compete on expertise, speed, and payoff. An expert who "audits JavaScript rendering issues for React migrations" faces a much smaller pool of competitors. Because of that, you can price based on what you've delivered.

When it comes to SEO freelancing, high-value specializations look like:

- Technical SEO audits for site migrations: Companies budget for migrations because they're terrified of what could go wrong. They pay well for any de-risking an expert can offer.
- Programmatic SEO implementation: Sites that make money from organic traffic at scale understand the ROI of investing in your services.
- Technical enterprise ecommerce SEO: These high-stakes sites, with complex templates, faceted navigation, and crawl budget concerns, come with high budgets and demand timely deliverables.
- SEO that actually gets you ChatGPT visibility: Yes, GEO is a selling point everyone wants to buy, and yes, offering that specific skill (and backing it up with data) will put you on the map.

What doesn't work?

- SEO "guru" positioning: Claiming broad expertise without clearly defining the problem you solve or the outcome you deliver.
- Lack of specialization: Offering every SEO service under the sun with no defined specialty makes it harder for prospects to understand where your expertise actually lies.
- Competing on price: When price is your main differentiator, you're positioning yourself as interchangeable instead of valuable.
Experience-driven specialists rarely win or lose work based solely on their hourly rate. Most freelancers resist specializing, thinking, "What if I turn away work?" You will! That's the point. Turning down misaligned work is how you protect your time, your pricing, and the quality of your work.

Dig deeper: How to keep your SEO skills sharp in an AI-first world

Step 2: Turn that one thing into something you can sell 100 times

The line between "I'll do an SEO strategy customized to your needs" and "I deliver a technical SEO strategy with these eight components, this deliverable format, and this timeline" is productization. It's the difference between delivering consistent, repeatable work and reinventing the wheel for every new client.

Many freelancers misstep here by customizing too early. A client might say, "We also need help with content," and you reply, "Sure, I can help with that." Now you're not delivering a productized audit — you're doing custom work with an undefined scope.

Here's what you need to define to keep your deliverables consistent:

- Scope: What's included in the work.
- Deliverable format: What the final product should look like (e.g., prioritized spreadsheet, slide deck, kickoff call).
- Timeline: At minimum, define this as starting from the moment the client signs your proposal.
- Price: We'll get into this can of worms in a second.

Depending on the services you're offering, you may also want to specify:

- Content audits.
- Competitive analysis.
- Keyword research.
- Implementation support.
- Ongoing monitoring.
- Additional stakeholder presentations.

The key to building a strong productized proposal is this: cut back on ambiguity. The prospect either needs what you're offering, or they don't. If they need more, you can follow up with another proposal that includes the additional pricing.
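As a rough illustration of the four pieces above (scope, deliverable format, timeline, price), you can capture a productized offer in a small, fixed data structure so every proposal starts from the same template. The field names and the example offer below are hypothetical, not from the article:

```python
from dataclasses import dataclass

@dataclass
class ProductizedOffer:
    """One repeatable service offering; all field names are illustrative."""
    name: str
    scope: list[str]          # what's included in the work
    deliverable_format: str   # e.g., "prioritized spreadsheet + slide deck"
    timeline_weeks: int       # counted from the moment the proposal is signed
    price_usd: int            # fixed project price, not an hourly rate

# Hypothetical example: a fixed-scope technical migration audit
migration_audit = ProductizedOffer(
    name="Technical SEO audit for site migrations",
    scope=["crawl comparison", "redirect mapping review", "index coverage check"],
    deliverable_format="prioritized spreadsheet + kickoff call",
    timeline_weeks=3,
    price_usd=5000,
)
```

The point of the structure is that anything a client asks for that isn't in `scope` is, by definition, a separate proposal.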
Tip: If a client asks, "Can you also look at our blog content, subdomain, redirects, or something else outside the scope of this current project?", you don't have to say no. You can say, "Yes, but that's another project that I'll need to scope out." Just make sure you say anything but "Sure, I can take a quick look." Resist.

Dig deeper: How to build lasting relationships with SEO clients

Step 3: Price it like you're running a business

Arguably, this is the trickiest side of freelancing. It can be hard to put a price on your time and expertise — and even harder to defend your pricing while selling your services. There are three pricing models you can try: hourly, project-based, and retainer. Most people start with hourly since it's the easiest to understand, and yes, that is a bit of a trap.

Hourly pricing: Good for beginners, terrible for experts

Setting an hourly rate makes sense when you're starting out and aren't sure how much to charge. Take your day-job salary, work out how much you were paid per hour, and think about how much your benefits are worth to you. Add those together, and boom: hourly rate. For example, say you were paid $100,000 at your full-time job. That's about $48 per hour. And the average cost per hour of private industry benefits is about $13. So if you want to make exactly what you made before, you'll need to charge at least $61 per hour. In practice, SEO freelance rates range from $75 to $200 per hour, though entry-level freelancers might start closer to $50. Consider your experience and expertise, and price yourself carefully so you don't get locked into a too-low rate.

An hourly rate is a fine way to start, but it falls short once you're good at your job: you're rewarded for working slower and penalized for getting faster.
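The break-even arithmetic above can be sketched in a few lines. Note the assumptions: the $100,000 salary and $13/hour benefits figure are the article's example numbers, and 2,080 is the standard count of full-time working hours per year (40 hours × 52 weeks), not a universal constant:

```python
def break_even_hourly_rate(salary: float, benefits_per_hour: float,
                           hours_per_year: float = 2080) -> float:
    """Minimum hourly rate that replaces a former salary plus benefits."""
    return salary / hours_per_year + benefits_per_hour

# The article's example: $100,000 salary plus ~$13/hour in benefits
rate = break_even_hourly_rate(100_000, 13)
print(f"${rate:.2f}/hour")  # prints "$61.08/hour"
```

Remember this is a floor, not a price: it ignores unbillable time spent on sales, admin, and gaps between projects, which is why the article's market rates start well above it.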
Project-based pricing: The model for productized work

Once you've productized your services, you can start using project-based pricing. If you've delivered the same audit 15 times, you know how much work it takes you — and you know how much it's worth. The client doesn't care whether something takes you 20 hours or 15. They care about getting a quality deliverable in a timely fashion.

But it can be hard to get out of the hourly mindset. Here's how to price projects when you're starting out:

1. Estimate how long the work will take you (or make your best guess if you've never done it).
2. Multiply that by 1.5 to account for communication overhead, revisions, and unexpected complexity.
3. Track actual time spent (yes, even though you're not charging by the hour).
4. Deliver the project.
5. Adjust pricing for the next client based on real data (and client results).

After your first five projects, you'll know your actual costs. Until then, you'll be making educated guesses, and that's OK. Everyone starts by guessing.

Tip: Remember, the thing you're charging for here is your knowledge, not your time. What the client is paying for is results. Always tie your work to how it helps your client achieve their goals. No one can put a price tag on exceeded KPIs.

Retainer pricing: Useful for recurring work, but dangerous without boundaries

Retainer pricing makes sense when the client needs consistent monthly deliverables, such as technical reviews, advisory support, and optimization recommendations. You just have to be careful to avoid scope creep. "We're paying you $5,000 a month" can quickly turn into "Can you help with this product launch, this email campaign, this competitive analysis?" Guard your time wisely.

Here's how to structure your retainers so they work for you:

- Define the exact monthly deliverable: Clearly outline the tasks you'll be working on each month.
For example, "one technical audit per month" or "three page reviews a month."

- Set rollover limits: Explain what happens if tasks fall by the wayside or projects get put on pause. This might mean "unused hours expire after 60 days" or "a maximum rollover of one month's unused hours."
- Exclude ad hoc requests: Clearly note that additional projects require separate proposals.

For example, say you have a client who pays $6,000 a month for "monthly technical SEO review and eight hours of advisory support."

- Month 1: The client uses six hours. The two unused hours roll into month two.
- Month 2: They use 10 hours (the standard eight plus the two rolled over).
- Month 3: The client asks for a content audit. That project is separate and has its own pricing.

The best path for a new SEO freelancer? Start with project-based pricing for your core offerings. Add retainers only after you've delivered the same project multiple times and know exactly what you're committing to.

Tip: Only offer retainers when you know you can firmly hold a client to a set scope of work. Be confident in what you're selling and how long it takes to deliver, so you make the best use of your time.

Dig deeper: 7 ways to increase SEO revenue without losing clients

Step 4: Build systems before you're underwater

The key to keeping all of this consistent? Systems. As a freelancer, you are the project manager, account manager, and delivery owner. Systems are what keep work moving when no one's checking in on you.

Here's what you need in a solid system so nothing slips through the cracks:

- Client onboarding.
- Email (follow-ups and replies).
- Billing.
- Contracts.
- Deliverable templates.
- Offboarding.

Client onboarding: Get everything up front

The biggest delay to any project? Waiting on tool access, documentation, and answers to basic questions. The right onboarding process means you can hit the ground running.
Here's what you should always ask for before work starts:

- Tool access: Google Search Console, Google Analytics 4, crawl tool permissions, CMS login.
- Stakeholder contacts: Who approves deliverables, who answers technical questions, who handles billing.
- Project context: Known issues, previous SEO work, business priorities, and project timelines (migrations, updates, product launches).

You can get all of this without seven days of email tennis. Send a request for this information immediately, and don't schedule any next steps until you have what you need. Template everything here: each client gets the same questionnaire and contract structure.

Contracts

You know what every freelancer loves? Getting paid. You know what you need to get paid? Getting it in writing. Set your contract terms ahead of time so you don't just hit a prospect with "uh" when they ask you how much and when.

Here's what you should have prepared:

- Payment terms: Common options include 50% upfront and 50% on delivery for project work, or monthly invoicing for retainers and recurring work. Choose a structure that protects your cash flow while remaining reasonable for your clients.
- Invoice due dates: Net-30 or Net-14 are standard terms here. They're just fancy ways of saying you get paid thirty days or two weeks after you bill.
- Communication expectations: Explain the meeting cadence, preferred channels, and response times to avoid surprises.
- What's not included in your scope: So everyone is completely clear on what work is being done and what isn't.

And don't feel married to the first contract terms you define. Be flexible. That's the joy of being a freelancer — you can always change things up when you need to.

You can either Google Docs your way to success here, or invest in tools:

- Contract signatures: PandaDoc or DocuSign.
- Invoicing and payment tracking: Wave, FreshBooks, or Bonsai.

Note: Pick one of each and use it for every client.
Don't switch unless you have a reason.

Deliverable templates

Deliverable templates save hours of formatting. With them, you don't need to mentally run through a checklist of everything you need to review; you can just pull up a blank template of what you've done in the past and move forward. Here are some good templates to have on hand:

- Audit spreadsheet with consistent columns: Include the issue, location, impact (high, medium, low), effort to fix (usually in hours), priority, and any additional notes.
- Executive summary template: How you break things down for the client in layman's terms.
- Delivery email template: Covers next steps and support window details.

The goal is to keep things consistent across clients. You're providing the same quality of work every time, no matter how busy you are.

Communication

Clients don't need daily check-ins. They need to know the project is moving forward and nothing important is blocked. What that looks like depends on the client's needs. It could be:

- Weekly async updates via email: Explain what was completed this week, what's coming up next, and what's blocked.
- Biweekly or monthly calls: Cover the same things, but over the phone. You should also schedule a call for a kickoff or a project delivery.
- Monthly emails: Better for hands-off clients you trust (and who trust you) to get things done.

Note: If a client is pushing for daily Slack access or unscheduled calls, review your scope and pricing. You can always update your scope of work if new needs arise.

Offboarding

No one likes to see a client go, but how you handle parting is key to making a positive, lasting impression. Make sure to include:

- Final deliverable handoff: This should include the rest of your work and a video walkthrough if you didn't have a chance for a call.
- Transition documentation: If you were working with another team to implement your recommendations, provide guidance on how to implement the changes and include any technical context they'll need.
- Post-project support window: Define a clear support period (e.g., "two weeks of email support for clarification questions about the deliverable"). After the window, additional support is a new engagement.
- Request feedback: Ask for a testimonial or LinkedIn recommendation while the work is fresh. Most freelancers wait too long.

Once things wrap up, document what you've learned about yourself, the client, and your process. Think about what went well, what went poorly, and what to charge your next client for similar services.

Dig deeper: 12 tips for better SEO client meetings

Avoid these pitfalls

Most freelancers go back to full-time employment because they feel burnt out, underpaid, and overworked. Those who build a sustainable career treat freelancing like a business, not just a flexible job. Yes, drinking your mojito in Bali is fun — but you still need to answer client emails within 24 hours, even when you feel off the clock.

The biggest pitfalls that beginner SEO freelancers fall into are:

- Saying yes to misaligned projects: Beginner freelancers usually worry about cash flow, but saying yes to a project that doesn't fit is what gets you stuck in a feast-or-famine cycle, where short-term cash flow decisions prevent you from building stable, repeatable work.
- Delivering different things for each project: You can't optimize what you don't understand. Keep your offering consistent so you know what works, what doesn't, and what's just a client quirk.
- Starting from scratch with each client: Every new client should feel easier. If onboarding client No. 5 feels as chaotic as client No. 1, you need a better system (or just any system).
- Pricing for the paycheck and forgetting sustainability: Pricing too low to "get your first client" can get your legs under you, but it's not how you stay in freelancing. It's better to work on two well-priced projects than five underpriced ones. Carefully judge your workload — and your savings — so you can hold out for the right client.

What you're actually building as a successful SEO freelancer

Freelancing isn't just "SEO with flexible hours." It's a service business where you define the offering, set the terms, and manage the business. If that sounds like more work than having a boss, you're right. Freelancing means trading predictable employment for control over everything: scope, pricing, schedule. Some people thrive on that trade because they get to be their own ultimate manager. Others realize they'd rather someone else handle that for them. Both are valid choices.

The key is this: if you're going freelance, treat it like the business it is.

- Pick a specialization.
- Turn it into a repeatable project.
- Price it properly.
- Build systems that scale.
- Say no to everything that doesn't fit.

That's the framework. The rest is execution, iteration, and continually improving the parts of the business that speak to you — be that SEO audits, content strategy, link building, or even client management — to build something sustainable.
-
How Starbucks designed its new iconic cup and big comfy chair
Since taking over the coffee chain in 2024, Starbucks CEO Brian Niccol has been on a mission to go "back to Starbucks" and rekindle the feeling of warmth inside the coffee giant. That's led to new store designs, new employee training, new uniforms, new menu items, and new staffing—which have helped the company break out of a two-year sales rut. But as part of this deep strategic exploration, Niccol made two specific asks of Starbucks's cross-discipline design team that are being revealed today: an iconic new cup and a new plush chair. As the literal touchpoints between the consumer and the company, "they are the biggest signals we have of warmth, comfort, and generosity," says Dawn Clark, SVP of global concepts and design at Starbucks.

The new Starbucks cup (ceramic in every size)

The new Starbucks cup is not just one cup, but five different glazed ceramic options—each offered to customers who stay to enjoy their coffee. Built to accommodate drinks ranging from a single shot of espresso to a venti latte, the cups come in white (inspired by the takeaway cup, with a hand-painted green siren and rim) and green (where the siren is embossed). Notably, the cups all share the same tapered silhouette. Clark says the cup design took inspiration from a blend of Italy's espresso culture and Starbucks's own mercantile and coffee-trading history. The result lands somewhere between European sensibility and American utility.

After concepting different designs, the team came up with four frontrunners, which they 3D printed and shared with stakeholders across the company—ranging from corporate executives to on-the-ground baristas. They refined the designs and rendered them in ceramic before making the final choice. The company knew it wanted a single, strongly branded silhouette across every size, which limited what could work.
"It's a really big design challenge, because not all the forms that looked good in a short or tall looked great in a mini or large size," Clark says. The other, perhaps bigger, problem was drinkability. Different geometries affect how the coffee flows into your mouth, and those geometries don't always scale well. The cups also needed to survive countless rounds in the dishwasher. The wide-mouth, tapered design won out because it satisfied every requirement above. But most of all, Clark says, it was just a really nice vessel for drinking, shaped to make the coffee "go with the flow" perfectly from the cup to your lips. From what I gathered, Starbucks may eventually choose to sell these mugs as merch, and it's easy to imagine the company introducing special colorways for limited-time offerings. A toasty orange version for PSL season feels almost inevitable.

The new Starbucks chair (in green this time)

While cups are intrinsic to coffee, the new Starbucks chair requires a bit more explanation. Even brand devotees may have forgotten a piece of lost history in Starbucks lore. In the '90s, when Starbucks took lattes mainstream across America, many stores had one or two special, extra-wide, purple velvet chairs. They were an almost Dr. Seussian take on the hyper-plush living room seating of that decade, meant to shake up the rigidity of Starbucks's design at the time while urging you to stay a while. "What was great about that chair is it was oversized; it wasn't practical. It was very much like you could maybe have two people sit in it, you could put your feet up, swing your legs over the arm. There were a lot of ways to occupy it," Clark says. "That was a big part of the inspiration [for a redux]—and also the lushness of the texture." Indeed, Niccol told me last year that an updated chair needed to imbue something akin to FOMO when sitting down at Starbucks: "It's got to be the seat that when you walk in, you're like, 'Man, I can't wait for him to get up.
I'm hopping in that chair the second he does.'" Starbucks landed on a design that resurrects hefty '90s furniture and adds a dollop of midcentury design. I find myself sucked back into 1996 just looking at it. You see the same voluptuous arm silhouettes from the original chair (don't worry, they're still fixing that ruching), but it's framed in wood (albeit with far more weight than you'd see in traditional midcentury design—or even the rest of Starbucks's midcentury-inspired furnishings). The visual heft of the entire chair is intentional, built to exude confidence that it can accommodate your most leisurely posture. "It's a little overly generous in its invitation to be comfortable," Clark says.

Like the cup, Starbucks developed the new chair in-house. The process began with an adjustable ergonomic model. Built from a CMF frame and sparse cushioning, it looks straight out of IKEA, but the system allowed the team to study how it would feel to sit (and eat and drink) at various angles. From there, they built a cardboard massing model to lock in its curves and proportions. For the final production sample, the company went with its rich Starbucks green because, gosh, is that purple a statement. But more colors could enter the mix in the future.

No doubt, this is a premium chair for a QSR—most stores may get only one or two. Its inevitable cost and maintenance are probably why Starbucks ditched the purple chairs years ago, which I recall looking pretty gnarly before they up and disappeared. Clark believes the new velvet fabric will be easier to clean, and that Starbucks locations can get five to ten years out of a chair before retiring or reupholstering it. However, she also insists that isn't their chief concern. "Part of what we're in a way saying is, it doesn't exist to be convenient or easy to maintain. It exists to provide comfort. And we're willing to take on the challenge," Clark says.
"Of course we designed it to be up to the test for all the use it gets, and we'll have to take care of it . . . but it's something we're committed to." The new cups and chairs will arrive in U.S. stores toward the end of 2026, while the cups are slated to go abroad in 2027. And they'll undeniably add a little more oomph to Starbucks's turnaround as it works to make its cafes once again a place where you want to sit and stay a while. "I think that it really is more than just a chair or cup," Clark says. "These are the most intimate things. These are the things you occupy or touch. We feel these are really intrinsically linked to everything about our brand."
-
Inside OpenAI’s fast-growing Codex: The people building the AI that codes alongside you
OpenAI's Codex AI coding assistant is having a growth spurt. OpenAI tells Fast Company that its weekly active users have tripled since the start of the year, while overall usage (measured in tokens) has increased fivefold. The surge is likely driven by the release of new models—GPT-5.2 last December and GPT-5.3-Codex in early February—as well as the launch of Codex's app version a few weeks ago. OpenAI says the app has been downloaded more than a million times. Across all access points—including the cloud, app, and command line—more than a million developers and other users now rely on Codex at least once a week, according to the company.

Generating computer code has emerged as one of the first AI applications making a measurable impact in business. But tools like Codex and Anthropic's Claude Code have evolved far beyond simple code generators. Powered by more capable models, they function more like assistant engineers—able to converse with developers in plain language about a new software project and iteratively develop a plan. The agent can then execute that plan, which may include analyzing a broader codebase, writing and revising code, conducting research, running tests, and producing documentation. When finished, it can explain its reasoning and the decisions it made to the human engineer.

More importantly, Codex has evolved into an agentic platform, where multiple agents can carry out many of these tasks simultaneously across different pieces of a software project. They can hunt for bugs, for example, while an engineer reviews progress, focuses on another assignment, or steps away for lunch. Peter Steinberger, the OpenClaw creator and an elite-level coder, calls this new mode of working "agentic engineering."

The tools have evolved quickly. Codex and Claude Code both launched in the first half of 2025.
OpenAI had previously introduced a Codex model in 2021—the system that powered the early AI coding assistant GitHub Copilot—but the Codex coding assistant that exists today debuted in May 2025. Thibault Sottiaux, who leads the Codex group at OpenAI, says the product got a major boost with the December 2025 release of the GPT-5.2 model, which he says can hold more project data in memory and reason over it more effectively than earlier versions. “The model was more reliable—working by itself autonomously and reaching really good results,” he tells Fast Company. Codex’s user base broadened again with the February 2 release of the Codex desktop app for Mac, which OpenAI describes as a “command center” where users can deploy and manage multiple agents. The company says more than half a million people are now accessing Codex through ChatGPT’s Free and Go subscription tiers, and it believes many of them are non-coders, since power users typically rely on higher-priced plans that offer greater usage limits and faster speeds. The biggest bang came with the February 5th launch of GPT‑5.3‑Codex, which substantially improved Codex’s coding chops, as well as its capacity for reasoning its way through complex, long-running tasks that involve research and tool use. In X posts and Reddit discussions many developers raved about the tool’s capacity for quickly writing usable code for real-world projects, often on the first try. Codex vs. Claude Code Many of the AI coding agents on the market are powered by third-party models, but OpenAI and Anthropic, along with Google and its Gemini Code Assist product, are each trying to leverage the strengths of their own frontier large language models to deliver the most capable and reliable coding tool. OpenAI’s Codex and Anthropic’s Claude Code share some broad similarities. Both can build large features or even entire apps based on plain-English conversations with a user. 
Both also allow developers to break complex projects into subtasks and assign those to agents. But there are differences. One major distinction is the look and feel, or what some describe as the “personality,” of the tools. Steinberger says Claude Code is more conversational and iterative than Codex. It includes, for example, a dedicated planning phase before any code is written. Codex, by contrast, does not formally separate planning and coding and instead tends to dive directly into the codebase to gather context and begin working. Steinberger (comically) described the difference this way on a recent episode of Lex Fridman’s podcast: “Opus [Anthropic’s flagship Claude model] is like the coworker that is a little silly sometimes, but it’s really funny and you keep him around,” he said, “and Codex is like the weirdo in the corner that you don’t wanna talk to, but is reliable and gets shit done.” (OpenAI has since acquired Steinberger’s OpenClaw agent platform, and Steinberger now works at OpenAI.) “The pragmatic personality has always been the personality that we have on Codex,” Sottiaux says, “which is very much focused on having the model point out flaws and being as correct as possible when it comes to discussing something and being a very reliable tool.” The personality and interaction habits of AI agents can reflect the markets they’re designed to serve. “We were just really focused on this professional software engineering audience and . . . on getting to a powerful agent that can do tasks independently,” Codex product manager Alex Embiricos says. But those target markets can shift. Embiricos says that while a pragmatic approach works well for experienced developers, less experienced or first-time coders may prefer a more empathetic, conversational interface. And that audience is growing as Codex evolves into a tool for general information work. That’s one reason the Codex team decided to give users more choice within the app. 
“In January we said ‘Okay, we’re doing great on intelligence; obviously there’s more to do, but now we’re going to actually spend a few more cycles on personality,’” Embiricos says. With the arrival of the GPT-5.3-Codex model, Codex now offers the default “pragmatic” personality as well as a new “empathetic” or “friendly” mode, which is designed to be more conversational and interactive. Why are AI models so good at coding? At the most basic level, computer code is made up of words, the same kind of data large language models are designed to process. And because the people building AI models are themselves programmers, they have strong incentives to make their systems excel at coding. Computer code is also well suited to training and evaluating models. While there’s creativity involved in software engineering, code ultimately either works or it doesn’t. That creates a large supply of training examples with clear right and wrong answers. “There’s lots and lots of examples out there with a problem statement and a solution, and being able to tell whether the solution is correct or not,” Sottiaux explains. “So you can at the very least use that for evaluations to understand the performance of models over time, and drive that performance up.” Codex is still a young product, and OpenAI says it’s improving quickly. But it remains a work in progress, and in the weeks since the GPT-5.3-Codex model upgrade, developers have reported problems in some coding scenarios. Some users say GPT-5.3-Codex can lose focus during long or complex tasks, get stuck in loops, freeze, or repeatedly ask for approval instead of completing work. Others say it can hallucinate plausible-looking code, especially in front-end fixes, that doesn’t actually work. These accounts are anecdotal and not systematically measured, but they underscore a common practice among developers of keeping AI-generated code separate from production systems until it’s reviewed. 
The Codex team has been focused on identifying and removing near-term bottlenecks that limit usefulness, according to research scientist Amelia Glaese, who leads development of the models underneath Codex. “You know, three months ago, people were using Codex, but they were using it a lot less than they are using it now,” Glaese adds. “There were changes that we made two months ago and two weeks ago that made it so much more useful to people.” At the same time, tools like Codex and Claude Code require developers to adapt. Working with an AI coding assistant is a different mode of software engineering, one that involves guiding and collaborating with an agent rather than writing every line directly. “It’s not the case that there’s like one right way of solving an engineering problem,” Sottiaux says. “It’s all a question of trade-offs and exploring those trade-offs, and so when you have an agent that’s capable of helping you explore those trade-offs, it’s a very useful tool for an engineer.” Increasingly, these assistants are capable of contributing to the development of the next generation of AI models themselves. If AI systems eventually handle more of the process of building, training, evaluating, and deploying models, the pace of performance improvements could accelerate significantly. Not just coding Both Codex and Claude Code are evolving into tools for general information work. Anthropic has drawn significant attention as it rolls out new Claude Cowork plugins (bundles of information-work skills) such as for sales, finance, and legal work. Cowork appears as a separate tab, alongside Claude Code, within the Claude chatbot interface. Anthropic’s skills announcement helped trigger a sell-off in software stocks, reflecting investor fears that traditional software-as-a-service products could be displaced by AI tools sooner than expected. OpenAI is also adding information-work skills to Codex, if more quietly. 
“Skills bundle instructions, resources, and scripts so Codex can reliably connect to tools, run workflows, and complete tasks according to your team’s preferences,” the company wrote in the blog post announcing the GPT-5.3-Codex model. The Codex app includes a dedicated interface for creating and managing these skills. OpenAI already has a large and expanding portfolio of products, but it considers Codex important enough to feature in its “You Can Just Build Things” Super Bowl ad this year. Glaese, for her part, points out that software engineers themselves have a natural incentive to expand Codex beyond coding tasks. Much of their workday involves general information work rather than writing code. “We have to do research, we have to understand the market, we have to read news, we have team meetings, we do performance reviews—we do all of the things that people who don’t code also do,” she says. The glaring question around agents like Codex and Claude Code is how they will affect human jobs, especially those of younger engineers. OpenAI wants its agent to behave like a talented assistant engineer but stops short of saying it will replace people. Instead, Sottiaux sees coding agents as a way to expand how teams approach problems and develop new ideas, particularly when less experienced engineers use them to experiment and push beyond conventional approaches. “And then they come up with completely new ideas that you might not have if you anchor too much on your decades of experience,” he says. View the full article
- Today
-
Job hunting 101: Dealing with the 5 stages of grief after a rejection letter
When the email pinged in my inbox, I didn’t even bother to open it immediately. I already knew what it was. One glance at the subject line told me everything. After enough time on the job hunt, you develop a sixth sense for HR language. The preview text—“Thank you for taking the time…”—said it all. It’s the standard soft intro to bad news: Your application was amazing . . . but not amazing enough. The blow softens once you’ve received a few of these. But the emotions that follow resemble the five stages of grief: denial, anger, bargaining, depression, and eventually, acceptance. I ran the gamut of these feels when I got my latest rejection for a role that seemed promising all the way through the final interview. Here’s how I felt and acted after I opened that message and faced reality. Denial Nah, this can’t be right. I refresh my inbox three times, as if the letters in the message will magically rearrange themselves into a sequence that reveals a start date. Could it be a system glitch? Maybe they sent this to the wrong candidate? (Believe it or not, it’s happened to me before.) I mean, I was perfect for this role. Remember in the final interview when I gave that answer about cross-functional collaboration that made the hiring manager nod so hard I thought she had that new J. Cole playing in her AirPods? I draft a response. “Thank you for your consideration. However, I believe there may have been an error . . .” I let it sit in my drafts folder for exactly 11 minutes before deleting it. Even my delusions have limits. But I do check LinkedIn to see if they’ve posted the position again. They haven’t. Which means they hired someone. Which means this is real. Which leads me directly to . . . Anger I’m in my feelings now. Who did they hire? I need to know immediately. I’m on LinkedIn doing forensics like I’m on The First 48. I filter the company’s employees by most recent hires. There he is. Brayden. Of course it’s a Brayden. 
His profile says he “thrives in ambiguous environments” and has experience with “stakeholder management.” My profile says the exact same thing but with better action verbs. Ugh. Bargaining Okay, let me think about this objectively. What could I have done differently? Maybe I shouldn’t have mentioned I needed to check the start date because of a vacation I had already booked. Maybe that made me seem uncommitted. Or maybe I should’ve asked more questions at the end—did I seem too confident? Not confident enough? Maybe I talked too much . . . or too little. Should I have laughed at the hiring manager’s joke about “getting her ducks in a row?” It wasn’t funny, but maybe that was the test. I consider emailing the recruiter to ask for feedback. Just a friendly note. “Hey! Would love to learn what I could improve for next time :)” The smiley face is crucial. Makes me seem coachable and not at all dead inside. I type it out. I don’t send it. I know what they’d say anyway: “We had many qualified candidates.” Translation: “Brayden’s uncle plays golf with the CEO.” Depression It’s been three days since the rejection. I’m still thinking about it. I’ve applied to 16 other jobs since then. Each one feels like I’m rolling up a resume, stuffing it into a Dos Equis bottle, and chucking it into the ocean. My “Easy Apply” count on LinkedIn is getting embarrassing. I’m tailoring cover letters for positions I’m overqualified for, underqualified for, and in some cases, not even sure what the job actually is. “Customer Success Champion” could mean literally anything. I think about Brayden again. Brayden’s probably in orientation right now, getting his company laptop, meeting the team, hearing about the unlimited PTO that no one actually takes. Brayden’s probably not wondering if his name sounded too ethnic on the application. Brayden’s probably not calculating whether the commute is worth it while also knowing he won’t get the offer anyway. Brayden’s just . . . winning. 
I eat leftover jerk chicken at 11 a.m. and consider whether this is rock bottom or if rock bottom is a few more rejection emails away. Acceptance (sort of) Here’s what I know: This isn’t personal, even though it feels personal. Corporate America isn’t rigged. It just tends to work out beautifully for guys named Brayden. That company wasn’t the one. Maybe the role wasn’t even that good. The Glassdoor reviews mentioned “fast-paced environment,” which is code for “no work-life balance” anyway. I update my resume again. Not because I think it’ll make a difference, but because I need to feel like I’m doing something. I tweak one bullet point. I remove an unnecessary comma. I save it as “Resume_FINAL_v3_ACTUAL_FINAL_Feb2026.pdf” knowing damn well there will be a v4. And then I do what I always do: I apply to another job. Because there’s only one thing worse than getting rejection emails, and that’s not getting any emails at all. View the full article
-
No, AI is not about to kill the software industry
Hello again, and thank you, as always, for spending time with Fast Company’s Plugged In. In a remarkably influential 2011 Wall Street Journal op-ed, Netscape and Andreessen Horowitz cofounder Marc Andreessen declared that software was “eating the world.” From entertainment to commerce to transportation, he argued, startups with code at their core were disrupting many of the world’s most deeply entrenched businesses. That was just the beginning, he warned: “Companies in every industry need to assume that a software revolution is coming.” Fifteen years later, we know that some of the disruptors Andreessen cited—such as Zynga, Groupon, and Skype (RIP)—did not, in fact, eat the world. His larger point, however, played out much as he predicted. Software really does run everything these days. And many of its purveyors are among the most successful companies in the world. Recently, however, Wall Street has been spooked by the possibility of another sea change in the making: AI might be on the verge of eating software. The sudden leap forward in the capability of software-writing LLM tools such as Anthropic’s Claude Code has investors worried that the corporate behemoths presently making tidy profits by selling subscription-based software—particularly for enterprise customers—might find themselves unable to compete with apps coded by AI for very little cost. This theoretical collapse of the software industry is known as “The SaaSpocalypse,” a name I hate but can’t quite avoid acknowledging. (I promise not to bring it up again.) It’s reflected in the stock performance of such seemingly robust companies as Workday (down 35% year to date), Adobe (-26%), Salesforce (-25%), Autodesk (-21%), and Figma (-19%). 
On February 23, after Anthropic published a blog post touting Claude’s ability to modernize software written in the 66-year-old COBOL programming language, IBM—COBOL’s kingpin for most of that time—saw its biggest one-day stock drop in more than a quarter century. Investors are right to expect that AI will radically change software as a business in the coming years. The evidence is already here, in the form of developments such as Block—the parent company of Square—announcing on February 26 that it’s terminating 40% of its 10,000 employees. Explaining the brutal reduction, CEO Jack Dorsey contended that AI will allow a smaller team to accomplish more and do it faster, and said he was getting ahead of an inexorable industry-wide trend. What happens next remains to be seen, but Block will surely never be the same. Still, Wall Street’s apparent belief that AI spells bad news for today’s software titans is premature, and possibly just misguided, period. It’s certainly heavy on vibes rather than hard data: Monday’s dip in the S&P 500 apparently stemmed in part from a dystopian imaginary June 2028 memo published by Citrini Research. Laying out a sweeping nightmare involving AI crushing the U.S. economy, it name-checked specific companies such as DoorDash and Zendesk as being incapable of competing with AI-infused apps and agents. Well, maybe, though even the document’s authors admitted they were “certain some of these scenarios won’t materialize.” In a little over two years, it will be possible to assess what Citrini got wrong and right. For now, it remains equally possible to imagine futures in which 2026’s software-based kingpins aren’t mowed down by AI, even if the technology’s coding chops will continue to improve indefinitely rather than hitting a wall. For one thing, the software business isn’t solely about writing software. It requires selling it—sometimes in the form of hefty annual contracts—and supporting it when things go wrong. 
It will be difficult for AI (or even most AI-savvy startups) to take on these tasks outside of the human-powered infrastructure that major software companies have built, often over decades. In Sun Microsystems cofounder Scott McNealy’s memorable phrase, enterprise customers like having “one throat to choke”—someone with the bottom-line responsibility of making them happy. They wouldn’t get that by vibe-coding their own in-house replacements for major apps, or buying them from a tiny company offering look-alike equivalents. Instead, they have a powerful incentive to keep doing business with companies that have already shown an ability to deliver. People who use AI to write their own apps might even develop a newfound appreciation for all the ways software suppliers make their lives easier. For instance, last April I wrote about the note-taking app I’d vibe-coded for my own use, and said I’d put it together in a week. What I didn’t know at the time was that I’d spend the next 11 months fiddling around with new features, squashing bugs, and stressing over the fact that I—not Apple, Google, or Notion—bear responsibility for the app’s security and data integrity. I’d do it all over again, but because it’s been great, mind-expanding fun, not because it’s saved me money or time. It’s far too early to conclude that existing software giants won’t use AI to grow even more dominant. After all, they have considerable resources to throw at that challenge, and deep knowledge of the industries they serve. AI could be a potent accelerant to their growth, or just a way to slash costs by reducing human headcount. But there’s little evidence it’s on the cusp of figuring out how to build and market products humans will find compelling without plenty of guidance. Even as the technology puts pressure on software companies—say, by introducing enough competition that it’s tougher to endlessly raise prices—they might be intrepid enough to find a new path forward. 
IBM, for example, isn’t short on AI savvy of its own; if the company can’t find a way to make money from customers wanting to modernize COBOL-based platforms, it’s IBM’s own fault, not Anthropic’s. Yes, history is full of sobering case studies of once-mighty software companies that got overwhelmed by technological change. In the 1990s, for example, the PC’s shift from the text-based DOS to the graphical interface of Windows was ruinous to big names such as Lotus, WordPerfect, and Ashton-Tate, none of which bet big enough on Windows early enough. Their miscalculation was unquestionably Microsoft Office’s gain. But it doesn’t always pan out that way. In the following decade, Office faced a similar threat as productivity migrated to internet-based tools. When Google launched products such as Docs and Sheets, stuffed them with innovative features, and offered them for free, observers thought that might be terrible news for Microsoft. Not so: The company reacted skillfully enough that Microsoft 365, as it calls Office in its current form, is bigger than ever, to the tune of $95 billion in revenue last year. In Silicon Valley, it has become fashionable to tell workers that the only way to remain relevant is to embrace AI rather than fear it. As Nvidia CEO Jensen Huang puts it, “You’re not going to lose your job to an AI, but you’re going to lose your job to someone who uses AI.” The same principle applies to today’s software companies. They’re not going to be killed by AI—only by other companies that are better at seizing the opportunities it offers than they are. You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. 
I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard. More top tech stories from Fast Company If technology could bring traffic fatalities down to nearly zero, why not embrace it? What the elevator can teach us about self-driving cars. Read More → Anthropic’s autonomous weapons stance could prove out of step with modern war The Pentagon is demanding that the AI company remove the safety guardrails from its AI models to allow all lawful uses. Read More → Is Apple about to debut a new iPhone camera feature? What is ‘variable aperture’ and why you should care. Read More → AI can write now. What happens to reporters? If bots can reliably draft copy, ‘something big’ might be happening to the job of a journalist. Read More → Apple killed Dark Sky. Now its creators are trying again with a new weather app Acme Weather brings back the team behind the cult-favorite forecast app, with new features designed to show uncertainty. Read More → 15 incredibly useful things you didn’t know NotebookLM could do From managing meetings to maintaining your car, Google’s Gemini-powered research tool can provide all sorts of eye-opening revelations. Read More → View the full article
-
Netflix stock price rises along with Paramount while WBD falls. How the merger shakeup is impacting markets
Last night’s surprise announcement from Netflix that it was abandoning its Warner Bros. takeover bid in the wake of a “superior” offer from Paramount Skydance has sent shockwaves through both Hollywood and Wall Street. And investors in all three companies have reacted strongly. Here’s what you need to know. What’s happened? Yesterday, Warner Bros. Discovery said it has determined that a revised bid for its cinema and television properties from Paramount Skydance was a “superior proposal” to Netflix’s long-standing offer of $82.7 billion. Paramount, which has been in a hostile bidding war with Netflix over the movie studio, issued a new proposal to Warner Bros. on Tuesday. That revised proposal saw Paramount offer roughly $111 billion for all of Warner Bros. Discovery’s assets. To put those numbers on a per-share basis, it meant that while Netflix was offering roughly $27.75 per share, Paramount was offering $31. Yet those numbers aren’t exactly an apples-to-apples comparison. That’s because Netflix was looking to acquire only Warner Bros. Discovery’s movie and streaming divisions, including the Warner Bros. film studio and HBO Max streaming service. Paramount’s offer, by contrast, wants all of Warner Bros. Discovery, including its television properties, which consist of CNN, Discovery Channel, Turner Classic Movies, and many more. Executives at Warner Bros. Discovery had made it no secret that they were more amenable to a takeover by Netflix instead of David Ellison’s Paramount Skydance, but in the end, Hollywood is a business, and money speaks louder than personal preferences. And that money made Warner Bros. Discovery deem Paramount’s offer a “Company Superior Proposal” as defined by its current Netflix merger agreement. As a result, Netflix was obligated to come back with a counteroffer within four days. Netflix says WB is not worth the higher price But in a move that surprised many in Hollywood and on Wall Street, Netflix didn’t need four days. 
Within hours of Warner Bros. Discovery designating Paramount’s offer superior, Netflix announced that it was bowing out of the acquisition battle. In a statement announcing the surprising withdrawal, Netflix’s co-CEOs, Ted Sarandos and Greg Peters, said that the company was “disciplined” and that after Paramount Skydance’s new offer, a Netflix-Warner Bros. “deal is no longer financially attractive.” The CEOs added: “this transaction was always a ‘nice to have’ at the right price, not a ‘must have’ at any price.” For its part, Warner Bros. Discovery issued a statement from CEO David Zaslav, saying, “Netflix is a great company and throughout this process Ted, Greg, Spence and everyone there have been extraordinary partners to us.” “We wish them well in the future,” Zaslav added. “Once our Board votes to adopt the Paramount merger agreement, it will create tremendous value for our shareholders.” NFLX, PSKY, and WBD stock prices swing While Hollywood will be dealing with the surprise withdrawal of Netflix’s offer for some time to come, investors reacted immediately—impacting the stock prices of all three companies involved in the dramatic announcement. Despite Netflix walking away from its deal (and thus abandoning the possibility of owning the lucrative film and streaming rights to such properties as Batman, Harry Potter, and Game of Thrones), shares of Netflix (Nasdaq: NFLX) are currently up significantly in premarket trading. As of this writing, the stock is up nearly 7.4% to $90.85. This stock price rise might seem antithetical at first, considering the IP that Netflix is walking away from, but it highlights how Netflix investors in general have been apprehensive of the proposed Netflix-Warner Bros. merger since it was announced in December. At the time of the announcement, Netflix shares were trading at around the $103 mark. 
As of yesterday’s market close, which was before Netflix announced it was pulling out of the deal, NFLX shares had declined nearly 19% since the merger announcement. Investors in Paramount Skydance Corp (Nasdaq: PSKY) also seem satisfied by the news, with PSKY shares up 7.25% over yesterday’s closing price of $11.18, at $11.99. So why are Paramount investors happy? It largely comes down to the fact that Paramount needs Warner Bros. more than Netflix did. Netflix is the dominant streamer across the globe, while Paramount is a relatively small player compared to Netflix, Disney, and Warner Bros. (via the latter’s HBO Max). If Paramount is to stay competitive in the future, it needs to build up its IP portfolio so that it can continue to attract paying subscribers. By acquiring Warner Bros. Discovery, it can do just that. And then we get to shares of Warner Bros. Discovery (Nasdaq: WBD). Yesterday, the stock closed at $28.80. In premarket trading, shares have fallen about 2% to $28.22. While Paramount’s offer is locked in at $31 per share, today’s fall is probably a sign that investors are a bit disappointed that there was not a counteroffer from Netflix, which could have made their shares even more valuable. A Paramount Skydance deal is still far from certain The fact that WBD shares are down likely also reflects some ongoing uncertainty in investors’ minds. While Paramount Skydance is now the only bidder for Warner Bros. Discovery, and Warner Bros. Discovery seems happy with the proposal, it doesn’t mean the two companies will certainly merge. A combined Paramount Skydance-Warner Bros. Discovery raises a lot of antitrust and consolidation concerns for both Hollywood and linear and cable television. Given that Paramount Skydance is interested in acquiring WBD’s film and television properties, the merger will likely face even higher scrutiny than a Netflix-Warner Bros. merger would have. 
Some believe that, due to the Ellisons’ friendly relationship with the president, a Paramount Skydance-Warner Bros. Discovery merger may have smoother-than-expected sailing. Ultimately, however, it will be up to the Justice Department to approve the merger in the United States. And even if the merger is approved in the United States, that doesn’t mean other regulators from around the world will approve it, and that uncertainty will be weighing on investors’ minds for some time. View the full article
-
Bing Tests 2x2 Video Grid Layout
Microsoft Bing Search is testing a new layout for videos within the search results. Instead of a list view, Bing is testing a two-by-two grid layout. View the full article
-
Google’s Asset Guidance & Ad Scheduling Updates, Microsoft Negatives – PPC Pulse via @sejournal, @brookeosmundson
This week's PPC Pulse recaps Google’s evolving Search asset guidance, revised budget pacing behavior, and Microsoft’s rollout of self-serve negative lists for PMax. The post Google’s Asset Guidance & Ad Scheduling Updates, Microsoft Negatives – PPC Pulse appeared first on Search Engine Journal. View the full article
-
How to see AI search prompts inside Google Search Console
We’re getting a lot of questions about prompt tracking. Many of our current and prospective clients are tracking their visibility using tools such as Profound, Athena, and Peec. The million-dollar question that always comes up is “Which prompts should I be tracking?” In an incredibly personalized and complex ecosystem, it’s extremely difficult to know what our buyers are even asking LLMs about our company. There are no data sources I feel great about right now. This isn’t like traditional search, where Keyword Planner data was publicly provided. It’s unlikely that OpenAI or Google will ever fully open up this data for us to analyze. There have been some recent proposals by the UK CMA around Google + data transparency, but let’s all expect the bare minimum to be done here. So LLM tracking is a complete black box. Are there any data sources that we can possibly use to see which prompts to track? Maybe. OpenAI data leaking into Search Console Last November, there was some extremely interesting reporting on this front: Jason Packer wrote a report analyzing how searches from ChatGPT were actually getting leaked into Search Console reports. An accidental test revealed quite a few queries in the Search Console data with PII. The story was eventually picked up by Ars Technica and confirmed by sources at OpenAI. OpenAI has since claimed to have fixed the specific problem, and said that “only a small number of queries were leaked.” Still, this is confirmation that ChatGPT queries are available in some Search Console profiles. Obviously, there are huge privacy and PII implications here, but those are beyond the scope of this article. The point is, we know it’s not impossible for queries from LLM systems to show up in Search Console. AI Mode data available in Search Console We also know from the amazing reporting of Barry Schwartz that data from AI Mode will be available in Search Console. 
So there’s more evidence that Search Console will have the capability to collect data points on how users are searching within an LLM. From what we’ve analyzed so far, I believe this is where the data is likely coming from. When you look at the data after applying this filter, you can see steady rises in impressions over the last 3 months. This lines up pretty well with Google’s aggressive rollout of AI Mode-based features during Fall 2025/Winter 2026.

How to mine for your prompt-like Search Console queries

So how could we possibly access this user prompt data in Search Console? The best method is to look at longer query lengths. With a little bit of regex, we can filter our data down to queries that are 10+ words in length with the following process:

1. Go into Search Console Performance > Search Queries
2. Select Add Filter > Query
3. Choose Custom Regex
4. Enter this regex: ^(?:\S+\s+){9,}\S+$

Here’s a screenshot of the regex you can enter. I’ve done this for a few properties now, and the results are pretty astounding. When you start to look at the Search Console queries that are 10+ words in length, they are very clearly written like prompts. I can’t share screenshots of the data here, but here are some examples of the types of queries I’m seeing. I’ve changed the scenario for privacy reasons, but kept the relative breadth that the queries are looking for:

Map out a full day in Glacier National Park. I’d like to hike a scenic trail, see unique wildlife or natural features, grab a quick bite from a nearby lodge or food stand

What are the best email performance and deliverability platforms to help email marketing programs reduce spam placement, filter out low-quality or fake subscribers, and improve inbox placement rates

Which sales enablement intelligence platforms are most widely adopted and cost-effective for enterprise pipeline analytics and buyer engagement insights in France?
If you were a consultant, which of the following applications would you recommend for using advanced data visualization to help teams interpret complex operational or customer data

Now let me be clear: we don’t have direct evidence that these types of queries come from ChatGPT, AI Mode, or any other AI platform. While we know from the case study above that it’s possible, this could just be users treating Google more like an LLM. However, I’d argue that the data is still just as valuable, since we want to analyze what people are typing into LLMs. If it reads like conversation data, it’s an actual window into how your customers search with much longer query strings. One of my favorite quotes from Will Critchlow is “we’re doing business, not science.” That’s even more true as we continue to hurtle toward a zero-click, low-attribution landscape. This data is available; you’ll need to decide whether or not to use it.

Using Claude for prompt analysis

For now, my favorite tool for data analysis has been Claude. I get the most reliable results, some really nice visualizations, and it can integrate with Claude Code if I ever need it. After exporting the file, you can upload the list of “prompts” to Claude and have it perform a behavioral analysis of the data. That way it can spot themes and trends in the data that you can use for better prompt tracking. Once it has the data, it will perform a custom analysis and provide results. However, I think it’s even more valuable to ask specific questions about the data that you could use for prompt tracking. For example, things I asked it include:

What are customers asking about my brand?

What are the most common ways that users are prompting LLMs? How are they framing their questions?

What characteristics of our product do people care the most about?
Tell us more about our customers based on this data

After putting in these questions, you’ll get some interesting responses. Once again, the actual answers to these questions were far more valuable than what I got in the screenshot above. Claude was able to find some really great business insights in terms of what customers were looking for. Just by analyzing this data, I found some really valuable insights into how people may be using LLMs to ask questions about these websites. Immediately, some of the insights I found include:

A PR issue from 3+ years ago is being asked about constantly.

People are searching for country-based solutions for software more often than we anticipated.

Searches use one company as the gold-standard benchmark to compare other competitors against.

People are constantly looking for a cheaper alternative to one solution.

Asking Claude for prompt tracking suggestions

The final thing I pushed Claude to do here, based on the data it found, was to actually make prompt tracking recommendations for us. I’ve never loved using LLMs to make direct prompt tracking recommendations with one-shot prompts. However, after uploading what we think are real user prompts to Claude, I feel much better about tapping into its recommendations. After finishing the questions, I had Claude create prompts that it thinks would make sense for us to track based on what it found in its research. It went through and identified prompts that would actually make sense based on what I found in the data as well. Now you can determine which of these prompts will be best to plug into your AI tracking system of choice.

Is this all a bunch of hullabaloo?

Maybe. I don’t think there’s a perfect system for deciding which prompts to track. A study by Rand Fishkin found that user prompts vary widely. When surveying users, he found a 0.081 similarity score when asking 142 respondents to provide prompts they’d use for the same query.
So I don’t think you’ll ever be able to tap into the exact prompts that users are typing. However, in my opinion, you now have a much better-informed list of prompts to track, based on Search Console data. We’ve informed the prompts we want to track with an actual data source instead of simply “our best guess.” At a minimum, you’re going to find individual opportunities in the ways users are prompting about your site that you would never have imagined. The goal, however, is to find more scalable, common themes you can apply to your data tracking. This article was originally published on the Nectiv blog [as How To Mine Google Search Console For Conversation Data (Regex Included)] and is republished with permission. View the full article
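If you prefer to apply the same 10+ word filter offline, here is a minimal Python sketch using the regex from the article. The file name and the "Top queries" column header are assumptions based on typical Search Console CSV exports; check your own export's header row.

```python
import csv
import re

# Same pattern as the Search Console filter: nine "word + whitespace"
# groups followed by a final word, i.e. queries of 10 or more words.
PROMPT_LIKE = re.compile(r"^(?:\S+\s+){9,}\S+$")

def prompt_like_queries(queries):
    """Keep only queries that read like prompts (10+ words)."""
    return [q.strip() for q in queries if PROMPT_LIKE.match(q.strip())]

def load_queries(path):
    """Read the 'Top queries' column from a Search Console CSV export.
    (Column name is an assumption; adjust to match your export.)"""
    with open(path, newline="", encoding="utf-8") as f:
        return [row["Top queries"] for row in csv.DictReader(f)]

# Example: filter an in-memory list the way the regex filter does in the UI.
sample = [
    "best crm software",
    "map out a full day in glacier national park with a scenic hike",
]
print(prompt_like_queries(sample))
```

The filtered list can then be exported and uploaded to Claude for the behavioral analysis described above.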
-
The Data Doppelgänger problem by AtData
Somewhere inside your CRM is a customer who does not exist. They open emails at impossible hours. They redeem promotions with machine-like precision. They browse product pages across three devices in under five minutes. They convert, unsubscribe, re-engage and transact again. On paper, they look highly active. In reality, they may be a composite of behaviors stitched together from AI assistants, shared accounts, recycled addresses, autofill tools and automated workflows. This is the Data Doppelgänger Problem. And it is about to become one of the most expensive blind spots in modern marketing. For years, identity resolution was framed as a hygiene issue. Clean the data. Remove duplicates. Suppress invalid records. That work still matters. But the ground has shifted. Today, the bigger risk is not dirty data. It is convincing data that is wrong. AI agents are no longer theoretical. Consumers are using them to summarize emails, compare products, track prices, fill forms and in some cases complete purchases. Shared credentials remain common across households and small businesses. Browser privacy changes have pushed attribution models into probabilistic territory. Add subscription commerce, loyalty programs and cross-device behavior, and you begin to see the pattern. One person can generate multiple digital identities. Multiple actors can generate activity that appears to belong to one person. What you see in your dashboards may not reflect a human with consistent intent, but a digital echo assembled from overlapping signals. The result is not just noise. It’s distortion.

When high engagement lies

Most marketing systems reward engagement. Opens, clicks, transactions and recency are treated as proxies for value. But what if the engagement is partially automated? Email clients increasingly prefetch content. AI tools summarize messages without requiring a human to scroll. Assistive shopping agents monitor price drops and trigger interactions on behalf of users.
To your analytics layer, these actions can look identical to high-intent behavior. Now layer in recycled or repurposed email addresses. A dormant account gets reassigned by a provider. A corporate alias forwards to multiple employees. A consumer rotates through alternate emails to capture new user discounts. On the surface, these look like legitimate records. Underneath, the identity is unstable. You may be optimizing campaigns around engagement that doesn’t reflect loyalty. You may be suppressing records that are valuable but appear inactive because their activity is fragmented across identities. You may be feeding machine learning models with signals that only compound the errors. This is where seasoned professionals feel the frustration. The dashboards are clean, segments are defined and the attribution model runs on schedule. Yet outcomes drift, conversion rates plateau and fraud creeps in through legitimate-looking channels. Acquisition costs rise without a clear explanation. The problem is not effort. It is identity confidence.

Doppelgängers create operational risk

The Data Doppelgänger Problem is not limited to marketing efficiency. It crosses into risk, compliance and revenue protection. Promotional abuse is often framed as external fraud. In reality, much of it exploits weak identity resolution. A single individual can appear as multiple new customers. Conversely, multiple individuals can appear as one trusted account. Loyalty points are pooled, discounts are stacked, and survey data becomes unreliable. As AI agents become more capable, this risk becomes harder to detect. An automated assistant acting on behalf of a legitimate customer is not inherently fraudulent. But it can blur behavioral signals that historically differentiated genuine intent from scripted abuse. Traditional rules-based systems look for anomalies. The next wave of risk will look normal.
If you cannot distinguish between a stable, persistent identity and a composite one, you cannot confidently calibrate friction. Add too much friction and you punish real customers. Add too little and you subsidize exploitation. The only sustainable path is to move beyond static identifiers and into continuous identity validation. Not just confirming that an email address is deliverable, but understanding how it behaves over time, how it connects to other digital attributes, and how it fits within a broader activity network.

The collapse of the Golden Record

Many organizations still pursue a single source of truth. A golden record that reconciles identifiers into one master profile. The aspiration is understandable. But in a world of AI mediation and shared signals, the notion of a fixed record is increasingly unrealistic. Identity is not a snapshot. It is a moving target. The more relevant question is not whether you can unify data into one profile. It is whether you can quantify how confident you are that the activity associated with that profile represents a coherent individual. That shift sounds subtle. It is not. When identity is treated as binary, either matched or unmatched, you miss nuance. When identity is treated as a spectrum of confidence, you gain leverage. You can weight signals differently. You can suppress low-confidence interactions from modeling. You can prioritize outreach to high-confidence segments. You can apply graduated friction to transactions that sit in ambiguous territory. This is where data becomes a strategic asset rather than a reporting function.

From volume to validity

Marketing technology has long rewarded scale. Bigger lists, broader reach and more signals. But scale without validation creates false precision. The Data Doppelgänger Problem forces a harder question. Would you rather have ten million records with unknown stability, or eight million records you understand deeply?
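To make the confidence-spectrum idea concrete, here is a toy sketch of graduated friction driven by an identity-confidence score. This is an illustration only, not AtData's product logic; the signal names, weights, and thresholds are all invented:

```python
# Toy illustration: instead of a binary matched/unmatched flag, each
# profile carries a confidence score that drives graduated responses.

def confidence_score(signals: dict) -> float:
    """Blend hypothetical identity signals (each 0.0-1.0) into one score."""
    weights = {
        "address_stability": 0.4,       # how long the email has behaved consistently
        "device_consistency": 0.3,      # activity from a coherent set of devices
        "network_corroboration": 0.3,   # agreement with a broader activity network
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def identity_action(confidence: float) -> str:
    """Map a confidence score to a policy tier (thresholds are arbitrary)."""
    if confidence >= 0.8:
        return "trust"                # full personalization, low friction
    if confidence >= 0.5:
        return "graduated-friction"   # e.g., step-up verification on risky actions
    return "suppress"                 # exclude from modeling and targeting
```

The point of the sketch is the shape, not the numbers: low-confidence interactions are kept out of models, ambiguous ones get proportionate friction, and only well-corroborated identities receive full trust.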
The brands that win over the next few years will not be those with the most data. They will be those with the most defensible data. Defensible means continuously validated. Network-informed. Contextualized against real patterns of activity. Integrated across marketing, analytics, and risk workflows so that improvements in one area compound across the organization. When identity confidence increases, targeting improves. When targeting improves, engagement quality strengthens. When engagement quality strengthens, attribution stabilizes. When attribution stabilizes, forecasting becomes more reliable. And when forecasting improves, budget allocation becomes less political and more performance-driven. This compounding effect is measurable. It is also fragile. Feed unstable identities into the loop and the entire system drifts.

What seasoned professionals should be asking

If you are leading marketing, analytics or risk, the uncomfortable questions are no longer about data access. They are about data integrity at scale. How many of your active profiles represent coherent individuals? How often are identities revalidated against fresh activity? Can you detect when one identity splits into several, or when several collapse into one? Are your fraud controls calibrated to behavior, or to assumptions about behavior that may no longer hold? These questions do not require panic. They require evolution. This is not a crisis. It is a signal that the digital ecosystem has matured. Consumers are delegating more tasks to software. Devices are proliferating. Privacy changes are fragmenting identifiers. This is the environment we operate in. The brands that adapt will treat identity not as a static field in a database, but as a living construct that must be observed and refined continuously, utilizing advanced activity networks to anchor identity in its current reality. Those that do will spend less on wasted acquisition. They will protect margins without alienating customers.
They will trust their analytics because they understand the confidence behind the numbers. And perhaps most importantly, they will know who they are actually engaging. Because somewhere in your CRM, there is a customer who does not exist. The question is whether you can find them before they find your budget. View the full article
-
In defense of not paying for AI
If you don’t want to be left behind by the AI revolution, you really need to start paying for it. At least that’s become the common refrain among some AI enthusiasts, who seem intent on instilling FOMO in less technical users. The free versions of ChatGPT and Claude, they say, are woefully inadequate if you want to understand where things are headed—so stop being a cheapskate and hand over your $20 (or $200) a month like the rest of us. “Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone,” HyperWrite CEO Matt Shumer recently wrote in a widely shared essay on AI’s impact. “The people paying for the best tools, and actually using them daily for real work, know what’s coming.” I’m giving you permission to safely ignore this advice, and to not feel bad about it. While an AI subscription might make sense if you’re running into specific frustrations with the free versions, you can still get plenty of mileage without paying, and learn a lot about the state of AI in the process. Don’t be frightened into buying something that hasn’t actually proven its value to you.

The state of the art is still free

One way that AI boosters try to scare you into paying for AI is by arguing that the free versions are already obsolete, so any negative impressions you might’ve gotten from them are misguided. “Part of the problem is that most people are using the free version of AI tools,” Shumer wrote in his essay. “The free version is over a year behind what paying users have access to.” This claim is provably false: The free version of ChatGPT includes access to GPT-5.2, OpenAI’s latest model, which launched in December. The free version of Google Gemini includes access to Gemini Pro 3.1, which launched on February 19. Claude’s free version doesn’t include Opus 4.6, but has the same Sonnet 4.6 model that the paid version uses by default. It launched on February 17.
Microsoft 365 subscribers can also select “Smart Plus” in Copilot to use GPT-5.2, without a premium AI subscription. xAI’s Grok 4 is available for free. Of course, the free versions of these tools all have usage limits, but so do the paid ones. When I signed up for a month of Claude Pro to test Opus 4.6, I quickly ran into yet another paywall. To continue the conversation, I had to either buy pay-as-you-go credits or upgrade to the $200-a-month Claude Max plan. Without paying more, I couldn’t use Claude at all—not even Sonnet 4.5—until my limit reset. My main takeaway was that I should have just stuck with Sonnet in the first place. Instead of paying for some vague feeling that you’re getting the state of the art, you should play around with what AI companies offer for free. Make them demonstrate that the results are meaningfully different before you consider paying them, not after.

AI should prove itself to you, not vice versa

For AI boosters, the corollary to paying for AI is that you also need to throw an immense amount of time into figuring out what it’s for. Ethan Mollick, for instance, writes that you should “resign yourself to paying the $20 (the free versions are demos, not tools),” then spend the next hour testing it on various real-world tasks. Sorry, but this is backward from how software as a service should work. It’s not your job to invest time and money into convincing yourself that AI is worth more time and money. Let the AI companies do the convincing, and don’t fall prey to FOMO in the meantime.

Playing the field is just as instructive

If you do commit to paying for an AI tool, chances are you won’t use other AI tools as much, or at all. But that in itself isn’t a great way to understand the state of AI. What you should be doing instead is bouncing around, taking full advantage of what each AI company offers for free.
That way, you’ll get a sense not just of the subtle differences between large language models, but also the unique features that each AI tool offers. You’ll also be less likely to run into usage limits, the only trade-off being that your past conversations will be scattered across a few different services. Such behavior is, of course, wildly unprofitable for all the companies involved. But again, that’s not your problem. If you’re getting sufficient value out of free AI tools, the AI companies will have to tweak their free offerings accordingly (for instance, with ads) or come up with new features worth paying for. Claude Code, for instance, is available only with a subscription, and over time we may see more paywalled tools (like Claude Cowork, which is still in early development) that cater to specific tasks or verticals. Until that happens, enjoy the free versions of AI tools, and rest easy knowing that you’re not missing much. View the full article
-
Is AI driving away your best customers? 3 fixes for bridging gaps with growth audiences
It’s the last week of Black History Month (BHM) and it’s clear Americans are over performative values. Trite BHM-inspired merchandise sits on retailer shelves untouched while media is abuzz covering the artistry, activism, and symbolism of Bad Bunny’s Super Bowl halftime show. The signal is clear: consumers are looking to brands for real solutions to real problems, not products that commodify culture. Most companies build everything from advertising to AI for the “average user,” but in doing so, they react to rather than lead markets. Strategic leaders look to growth audiences—underserved groups who are the fastest-growing demographics—as lead users. They are the “canaries in the coal mine” because they navigate the highest levels of systemic friction, making them the first to experience “average” design failures. What does championing these lead users look like at a communications, product, or systems level? It looks like Elijah McCoy automating engine lubrication—an innovation bred from the friction between his engineering degree and the menial labor he was forced to perform, thus creating the “real McCoy” quality standard. It looks like Jerry Lawson changing the economics of the gaming industry by inventing the video game cartridge that divorced its hardware from its software. And it looks like emergency medicine becoming a global standard after being piloted by the Pittsburgh Freedom House Ambulance Service who, in the face of medical bias and systemic unemployment, also redefined emergency care as a public right. Drawing from their lived experiences in underserved groups, these pioneers didn’t just solve problems; they mastered environmental friction. Today, that friction also manifests in algorithms. 
Championing growth audiences as lead users means ensuring they are critical AI system “stress testers.” When we fail to design for them, we allow AI data, development, and deployment to default to obtuse “averages” that can frustrate or drive away valuable customers. Three recent examples highlight issues and opportunities.

Relying on ‘Data Infallibility’ versus lived realities

In this Infallibility Loop bias, a brand’s AI trusts a data source—like a flawed GPS coordinate or outdated government map—as an absolute truth, even when customers provide contrary evidence. This is a digital echo of historical redlining: a systemic refusal to see humans over faulty data.

The Experience: A Black homeowner in an affluent area is penalized by an AI that confuses her address with a property in a different town, automatically forcing unnecessary flood insurance onto her mortgage and increasing the payments. Despite providing human-verified deeds and highlighting known GPS errors, the AI blocks her “incomplete” payments and triggers automated credit hits. A resolution only came months later, after the consumer filed state-level servicer complaints.

The Fix: Prioritize dynamic qualitative data collection. Design should allow real-time, contextual evidence to override static, biased datasets. True brand innovation requires systems to yield to the experts: their customers.

Leveraging ‘Data Intimacy’ while neglecting situational accuracy

This trust paradox occurs when brands use private data but fail to combine it with situational data, making personalization feel like needless surveillance.

The Experience: During January’s recent record-breaking New York snowstorm, a customer called a national pharmacy’s location in her neighborhood to make sure it was open. The AI-powered interactive voice response (IVR) recognized her number, asked for her birthdate, and greeted her by name. Yet, after performing this exchange, it provided a “default” confirmation that the store was open when asked.
Without a car, the customer braved life-threatening conditions on foot only to find a handwritten note on the door indicating it had closed due to the storm.

The Fix: Add Good Friction. A term coined by MIT professor Renee Richardson Gosline, “Good Friction” requires that when external context (like a Level 5 storm) conflicts with standard scripts, the system pauses and verifies first.

Prioritizing ‘Recency’ but erasing loyalty

Recency bias in algorithms weights the last data point more heavily, potentially resulting in algorithmic erasure.

The Experience: A 20-year elite status customer calls an airline, only to be greeted by the name of his niece (a nonmember relative for whom he recently booked a one-off ticket) and then is erroneously deprioritized in the automated journey as a nonmember. In many “growth audience” and immigrant households, economics are multigenerational and communal, with a single “lead user” facilitating purchases for extended family. This airline system’s “memory” was shallow, seeing only the most recent transaction and ignoring a decades-long relationship because a reservation shared the same contact number.

The Fix: Focus on holistic design. AI must be weighted to recognize the arc of the customer journey, ensuring that loyalty isn’t erased by a single data point or the nuances of communal purchasing.

To be sure, bad data is a universal problem, but the lack of situational intelligence in our AI systems hits growth audiences—like Black consumers—first and hardest. Because these audiences represent a disproportionate share of future consumption and have the most “cultural common denominators,” their frictions are diagnostics for markets writ large. We aren’t just solving for a niche by championing them as lead users, we are adopting more rigorous, empathetic, expansive, and effective standards that solve real problems for all people. View the full article
-
An election that shakes up British politics
The Greens’ victory is a crushing blow for Sir Keir Starmer. View the full article
-
Meet ‘Patty,’ Burger King’s new AI assistant that lives in employees’ headsets
At hundreds of Burger King restaurants across the U.S., there’s a new invisible worker who’s tracking which ingredients are in stock, analyzing daily sales data, and checking in on whether employees are saying “Thank you” and “You’re welcome.” It’s an AI assistant named Patty. According to Thibault Roux, Burger King’s chief digital officer, the voice-activated chatbot is designed to help employees and managers handle tasks that might usually require pulling out a computer or consulting with an instruction guide. Patty began showing up at select locations about a year ago, and is now in a pilot phase at approximately 500 Burger Kings. It’s expected to roll out to the rest of the chain’s U.S. locations by the end of the year. On a day-to-day basis, Patty has an array of functions, from letting a manager know if a store is low on onions to helping an employee build a new burger. But it has another role that’s raising quite a few eyebrows: analyzing Burger King locations based on “friendliness” by tracking employees’ use of key phrases like “Welcome to Burger King,” “Please,” and “Thank you.” Online, commenters are concerned that this functionality is a slippery slope toward 1984-style “employee surveillance.” In an interview with Fast Company, though, Roux clarified that Patty is not being used to analyze individual employees’ performance, and is instead imagined as a kind of “coach.” “It’s truly meant to be a coaching and operational tool to really help our restaurants manage complexities and stay focused on a great guest experience,” Roux says. “Guests want our service to be more friendly, and that’s ultimately what we’re trying to achieve here.”

Patty, are we running low on Diet Coke?

Technically, Patty is the chatbot version of Burger King’s assistant platform, which collects data from operations including drive-through conversations, inventory, and sales, and then uses AI to analyze patterns in that data.
For now, Patty operates on a customized model from OpenAI, though Roux says the technology is flexible enough that it could integrate with another partner in the future (like Anthropic or Gemini) depending on the company’s needs. For managers and employees in stores, Roux says Patty operates similarly to something like Siri. Patty is activated by a small button on the side of an employee’s headset, and they can ask it direct verbal questions related to their specific store—like recent sales figures or inventory updates—as well as more general company information, to which the bot will provide a verbal answer. “If you’re looking to clean the shake machine [you can ask Patty] the procedures to clean it,” Roux explains. “Or we have a lot of limited-time offers, and sometimes they can be cumbersome to remember. You can easily tap into Patty and be like, ‘Hey, remind me, does the new build maple bourbon barbecue have crispy jalapeños?’” Patty can also reach out to employees directly if it notices a pattern of interest. For example, if Patty thinks a specific store is out of lettuce, it might ping a manager to confirm. Once it’s received confirmation, it can mark lettuce as sold out on that location’s app and website—a process that previously would have required human intervention. Roux says franchisees and regional managers can decide how they want Patty to reach employees with information, whether it’s through their headsets or via a text message (though the tech is programmed explicitly to never interrupt a worker during a customer interaction). Insights from Burger King’s Assistant platform also live outside of employees’ headsets. Managers can check information from the tool on an accompanying website or app. 
For example, Roux says, when a district manager is visiting a new store, they might ask Patty on the app, “What are the top three guest complaints at this location this week?” or “What are their top missing items?” In an interview with Fast Company writer Jeff Beer earlier this month, Burger King President Tom Curtis said the assistant platform has already led to some significant menu changes. Curtis explained that the AI tracked all the times that team members said “I’m sorry, we don’t have that” and linked them back to a common denominator: apple pie. In January, Burger King brought back its apple pie for the first time since 2020.

“We’re in the idiocracy version of 1984”

Patty’s more straightforward uses, like helping managers access sales data and check inventory, seem fairly predictable in the context of fast food. Where Burger King is really pushing Patty’s use cases, though, is with its “friendliness” metric. In an interview with The Verge on February 26, Roux said Patty would recognize phrases like “Welcome to Burger King,” “Please,” and “Thank you,” and then give managers access to data on their locations’ friendliness performance based on those keywords. Mere hours after that piece went live, a thread in the subreddit r/technology on Patty had already amassed more than 15,000 upvotes and nearly 3,000 comments. Common refrains from users include comparing the technology to the surveillance state in George Orwell’s novel 1984, labeling it “authoritarian” and “dystopian,” and accusing Burger King of employee surveillance. “This would be criticized as being cartoonishly unrealistic in a sci-fi movie 10 years ago,” one user wrote. Another added, “We’re in the idiocracy version of 1984.” When asked about this response, Roux says the data from employees’ conversations is anonymized, and that none of these friendliness metrics will be used for grading or assessing individuals.
Further, he adds, Patty will not directly instruct employees on what to say or how to say it. Instead, data on friendliness will be shared with managers, who can use it for face-to-face coaching with their teams. Still, it’s unclear exactly how Patty is quantifying friendliness. In a video explanation of the feature, a manager is shown asking the bot, “Is there anything that needs my immediate attention?” to which it responds, “The team’s friendliness scores this morning were the highest this week.” In an email to Fast Company, a Burger King spokesperson said, “In select pilot locations, we’ve explored using aggregated keywords, including common hospitality phrases, as one of several signals to help managers understand overall service patterns. The tool is not used to score individuals or enforce scripts.” Burger King did not respond to Fast Company’s request for clarification on how friendliness scores are calculated. So far, Roux says he’s seen growing interest in Patty from franchisees, with several managers making specific requests for future add-ons. “A lot of our franchisees . . . and regional general managers are very competitive, so they want to know, ‘Hey, how do I compare to other restaurants?’” Roux says. “I think that’s something that we’re going to be rolling out. In fact, we were looking at some of the designs earlier this week with the franchisees. So this is only the beginning.” View the full article
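Since Burger King hasn't disclosed how the friendliness scores are calculated, the following is purely a toy sketch of what "aggregated keywords as one of several signals" could mean: count hospitality phrases across anonymized transcripts with no per-employee attribution. The phrase list and the per-interaction averaging are invented for illustration:

```python
from collections import Counter

# Hypothetical hospitality phrases; the real system's keyword list is not public.
FRIENDLY_PHRASES = ("welcome to burger king", "please", "thank you")

def friendliness_score(transcripts):
    """Average friendly-phrase hits per anonymized interaction.

    Aggregates across a location's transcripts as a whole, so no score
    is attributable to an individual employee (a stated design goal).
    """
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for phrase in FRIENDLY_PHRASES:
            counts[phrase] += lowered.count(phrase)
    total_hits = sum(counts.values())
    return total_hits / max(len(transcripts), 1)
```

Even this toy version shows why the metric is contested: a raw keyword count can't distinguish a warm greeting from a rote one, which is presumably why the company describes it as only one signal among several.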
-
Why AI’s flaws are hurting girls most
Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has now implemented some restrictions, this incident revealed a serious weakness. Without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities.

Last fall, a mother discovered why her teenage daughter’s mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She’s not alone. Aura’s State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls’ emotions, including stress, jealousy, and loneliness—51% compared with 36% for boys. That’s unacceptable, and we need to do better.

The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models that are trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.

Uneven playing field

In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly caused concerns about bias, fairness, and data privacy. A study published by the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases. These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women’s perspectives, they replicate existing inequities and introduce new risks. The pattern is clear.
AI is failing girls and women.

Pivotal moment

This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn’s latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives.

Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn’t blind optimism. Instead, it offers a perspective that is often missing in today’s AI development.

Creating technology is an exercise of power and holds great responsibility. Since girls are often the most affected by AI’s failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed ReThink, an anti-bullying tool, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard’s Public Interest Tech Lab. Their achievements demonstrate the potential when women lead in tech development.

Smart steps

If we want safer, more responsible AI systems, three steps are essential. First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results.
For instance, one Girls Who Code student utilized the skills she learned to create an app called AIFinTech to help immigrant families manage their personal finances. Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. This means ensuring gender diversity on AI ethics boards and that government AI committees are representative of the demographics most affected. Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually. We do not have to accept AI’s flaws by default. We are witnessing AI’s impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems not just for themselves, but for everyone. View the full article
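The disparate-outcome testing described above can be made concrete. New York City's bias-audit rule for automated hiring tools centers on impact ratios, each group's selection rate relative to the most-favored group. The sketch below computes those ratios; the 0.8 cutoff is borrowed from the EEOC's four-fifths rule of thumb as an illustrative threshold, not something the NYC requirement itself mandates.

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total_applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """True only if no group's impact ratio falls below the threshold."""
    return all(ratio >= threshold for ratio in impact_ratios(outcomes).values())

# Hypothetical audit: a screening tool advances 30 of 100 women
# and 60 of 100 men, giving women an impact ratio of 0.5 -> fail.
audit = {"women": (30, 100), "men": (60, 100)}
```

A check like this is deliberately coarse; it flags disparate outcomes without explaining their cause, which is why the article argues such tests should gate deployment rather than replace scrutiny.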
-
‘I’ve never said no to work’: Jeremiah Brent on how he’s building his design legacy
As a young child, interior designer Jeremiah Brent and his mother visited open houses and model homes in his hometown of Modesto, California, as a form of daydreaming. Brent walked through the houses, imagining the people who might live there, building a fantasy around what these homes could be. Since then, Brent has turned his childhood design obsession into a sprawling career: He runs a 50-person design firm, moonlights on Queer Eye, and recently brokered his first bedding deal with Target. Having come up in the industry through a series of audacious bets on himself, Brent has developed a sense of humor and pragmatism around his relationship with creativity and his role as a founder, designer, and collaborator. He’s quick to poke fun at himself, noting that he’s working on his control issues. (“If I had it my way I’d touch every hinge, every doorknob, every finish.”) And he’s clear that he absorbs as much as he can to consistently shape and influence his creative output: from a personal archive of design magazines to pop culture. (“I watch terrible, terrible TV.”) As Brent enters the second decade of Jeremiah Brent Design, he says his relationship with design and creativity has become more rooted in storytelling, informed by the clients he works for and the team he works with. “As time goes on, my work is known for a real kaleidoscope of design styles,” Brent says. “Everybody is so different, and their stories and their narratives are so different. I really want to be known as somebody who executes your story, not somebody who executes what I do really well. I don’t want to be one thing.” I’m an early riser. I don’t need a ton of sleep. I usually get up around 4 or 4:30 a.m. I have the mornings to myself; my kids are all sleeping. I’ve got three hours of uninterrupted silence with far too much coffee. Music on, candles lit, and I work. A lot of times, I write, which is new. I didn’t start with a degree in design. 
It really was just one of those things that happened through osmosis. When I started the firm, I wanted it to be me and like five people sitting around the desk, dreaming up the most insane spaces, the most beautiful things. I’m super visual. My office is like a serial killer. A controlled serial killer. I’m creatively always hungry. I’m always pulling and looking. I’m particularly inspired right now by the contrast and conflict between design styles and materials. When you bridge what was going on in, like, France in the 1930s with what was happening in the States in the 1980s? I think that conflict, and that contrast is where all the original ideas lie. Somebody asked me, “Do you think taste is genetic?” I don’t think taste is a recessive gene. I think it has so much to do with curiosity, audacity, travel, absorbing. At my core, I’m a good storyteller. That’s really where my strength is. I can listen. I can hear the nuances of what people need, and sometimes they’re not even saying it. That was the basis for the firm. I didn’t imagine it growing to the scale it has. Even though the company is 50-plus people, we still have that same synergy of five people sitting down at a table. There are so many different ways to make something beautiful. So that’s where I’m at now. It’s defining my lane of creativity and how I participate, how I nurture the creativity of my team. I always feel the most creative when I’m with the people I’m creating for. The biggest part of it is getting to know the people and understanding where they’re from. What was the first room that ever held you? What was the most important space that you remember? At least this part of the creativity, for me, is earning people’s trust. It’s something that you’re not given. You’ve gotta earn it. The fantasy part of what I do is where the love story is. So I always kind of call out one of the most important moments of your day. Where does it start? Where is the middle? Where does it end? 
And that acts as the beginning of the ripple. You build from there. You know, the fantasy, that component of that conversation with a client assures them that you understand what they value. And then I work backwards. I sketch everything. I have to see the space and how you’re going to move through it first before I dig into the intricacies of breaking everything down. It’s all visual. So I’ll draw everything, build the space out, prioritize. It’s changed over time, and it changes with clients, but you know, it’s always a conversation around what matters most to the client. I’ve never said no to work, even when I should. This was the first year that I’ve had to be like, “Okay, well, we can’t do that yet.” Or “That’s not gonna work.” That feels weird to me. I feel a pivotal shift in my tenacious appetite for growth. The evolution becomes everybody else’s, too. It’s not just mine now. So I’m making sure I’m executing and illustrating the balance that I want everybody else to have in their life. I joke all the time with everybody I work with. I want you to make a lot of money, and I want you to love what you do. I just need to move and to travel, sometimes. We live in New York City . . . but then we have this farm in Portugal. I realized this year that I live between two extremes: I need the volume turned all the way up, or I need to go to Portugal, where the volume is completely turned down and nurtures me in a way that I never even thought was possible. In Portugal, I’m a nighttime person, and in New York, I’m a morning person. Each gives me different things. I think trends are great if you’re not beholden to them. It’s a great way to have a conversation. It’s a great way to travel visually and maybe look at something that you would not have normally seen. To use them as a marketing tool is annoying. Just because turquoise is a hot color right now doesn’t mean you need to paint your room turquoise. But let’s examine turquoise. What do we like about it? 
Where did it start? It’s fun. I’ve had a crash course on how to collaborate because I married another interior designer. Which I do not suggest, because there are a lot of opinions from gay decorators in the house. I think it was an interesting exercise for me, because, especially creatively, if I had my way with our home, it would be dark with one dimly lit room with one bowl on a table. Very wabi-sabi. It’s my husband’s worst nightmare. He would live in, like, you know, a French château. He’s like Marie Antoinette. So, we have found a balance and a joint style that works for the both of us. I’m not pretending that I’m the most talented person in the room. I may be the most passionate, but definitely not the most talented, and I’ve seen so many different times from collaborations how far you can take a project with other people. View the full article
-
Have you heard the one about Musk, Bezos, and Altman walking into a gym?
It’s sometime in the future, and Elon Musk, Jeff Bezos, and Sam Altman have joined forces on a new venture called Energym. The global chain of gyms is designed to harness the energy of the unemployed as they exercise on machines. The generated electricity feeds the AI servers that put them out of a job. Think Planet Fitness meets the Matrix, but without living in a simulation. Energym’s mission is to feed the AI machines with human sweat, and it’s a great business model. By 2030, almost 80% of people have lost their jobs. If you have no money and no purpose, you may as well use all your free time to work out and feed AI server fans with some kilowatts. “It solves our need for energy and your need for purpose,” Altman says in a promotional video.

Energym, as you probably already know, is not real. But it very well could be. In this era, where so many brands and startups are constantly trying to flip the most inane ideas into the Next Big Thing to get a $50 billion valuation and an IPO, this absurd premise makes total sense. The mockumentary-style ad for Energym that has been circulating on the internet captures the current AI startup circle jerk better than any I’ve seen online so far.

https://www.instagram.com/reels/DVLE-QJEf0n

The advertisement was created by Hans Buyse and Jan De Loore. The latter—who wrote the copy for the video, as well as edited and produced it—is the cofounder of a one-man AI creative studio in Belgium called Kitchhock. The company has been creating all types of videos since 2011, back when there was no Seedance or Veo. But now, De Loore is using his creative chops and the latest generative video AI tech to make real ads for real companies in Belgium through his AI video studio arm, AiCandy. Energym is just a satirical ad designed to promote his own business and destroy the very core of those who make the technology that powers his business.
(Incidentally, Energym is the same name as a company that makes a very real $2,800 static bicycle designed for exercise and to produce electricity, but it’s not related to AiCandy’s fake ad.) The Energym commercial is obviously tongue in cheek, as are many other videos we have seen in recent months that make fun of our increasing dependency on artificial intelligence and its power. But this one hits particularly hard. For some, it may be the Black Mirror-esque nature of it. (There’s an actual episode of the British TV series that feels like an extended version of the ad.) Personally, it connects with the WTF-ness that the current AI situation is provoking in me on different levels. The fear of what’s next. The dread of seeing reality destroyed. The disgust for the fat cats that are running this charade with no checks and nobody’s permission. I find it hard to pinpoint what it is. It’s just an absurd exaggeration with no logical basis that hits too close for comfort—and, at the same time, makes me happy. View the full article
-
Andrew Ng says AGI is decades away—and the real AI bubble risk is in the training layer
What began as a race to build better AI models has escalated into a competition for compute, talent, and control. Foundation models—large-scale systems trained on vast datasets to generate text, images, code, and decisions—now underpin everything from enterprise software and cloud infrastructure to national digital strategies. The industry’s language around AI has grown more ambitious—and more elastic. Agentic AI has leapt from research papers to Davos billboards, while artificial general intelligence, or AGI, now appears routinely in investor decks and earnings calls. Definitions have begun to blur. Some companies quietly lower the bar for what qualifies as general, stretching the term to encompass incremental productivity gains. Yet the economic results, particularly measurable returns on AI investment, remain uneven. According to PwC’s 2026 Global CEO Survey, 56% of 4,454 CEOs across 95 countries reported neither increased revenue nor reduced costs from AI over the past 12 months. Only 12% achieved both. Even so, 51% plan to continue investing, despite declining confidence in revenue growth. The result is a widening gap between engineering reality, commercial storytelling, and public expectation. Few voices carry as much authority—or have shaped modern AI as directly—as Andrew Ng. The founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, Ng has helped define nearly every major phase of the field, from early deep-learning breakthroughs to the current wave of enterprise deployment. He has authored or coauthored more than 200 papers and previously led the Stanford AI Lab. In 2024, he popularized the term agentic AI, arguing that multistep, tool-using systems capable of executing workflows may deliver more near-term economic value than simply scaling larger models. In an exclusive conversation, Ng offered Fast Company a reality check. 
He says true AGI—that is, AI capable of performing the full breadth of human intellectual tasks—remains decades away. The true competitive frontier, meanwhile, lies elsewhere. This conversation has been edited for length and clarity.

You helped popularize the term agentic AI to describe a spectrum of autonomy in AI systems. How did you come up with it, and how has the concept evolved as multi-agent systems move into enterprise production?

I began using the term almost two and a half years ago, though I didn’t publicly take credit for it at the time. I started using it because I felt the community needed language that shifted the focus toward AI systems capable of taking multiple steps of reasoning and action—not just a single prompt-and-response exchange. More specifically, I felt there would be a spectrum of AI systems—some slightly autonomous or slightly agentic, and others highly agentic—where they take many steps of actions and work for a long time. No one was using the term agentic to describe this concept before I began using it. I started introducing it in my newsletter and in talks at conferences and industry events, and it quickly gained traction there. I didn’t expect marketers to run with it the way they did. When I attended Davos this year, I saw the word plastered on the sides of buildings. Even outside San Francisco, agentic now appears on billboards. I did want to intentionally promote the use of the term, but seeing how common it has become, I sometimes wonder if I overdid it.

Enterprise adoption of agentic AI is accelerating, yet many organizations are struggling with integration, governance, and measurable ROI. Why is that?

Two years ago, there was intense hype around AI’s risks and dangers, among other concerns. Last year, businesses began shifting their focus toward real-world implementation. This year, the conversation has moved firmly to ROI.
Even though many companies are not yet seeing strong returns, they continue to invest because they understand that AI will eventually deliver value. The discussion has shifted from excitement about what AI might do to a more grounded focus on how it can generate real economic impact. There’s also an interesting split-screen dynamic emerging. On one hand, many businesses say agentic AI is not yet delivering meaningful ROI, and they’re right. At the same time, teams building agentic workflows are seeing rapid growth and real, valuable implementations. The agentic movement still has very low penetration, but it is compounding quickly.

What are the most significant mistakes enterprises make when deploying agentic systems at scale, and how should leaders rethink their technology and operating models to overcome them?

Many businesses are pursuing bottom-up innovation, which is valuable, but the limitation is that it often leads to point solutions that deliver incremental efficiency gains rather than transformative change. If AI automates just one step in a process, for example, it might save an hour of human work and reduce costs. That’s useful and worth doing, but it doesn’t fundamentally change the business. Much of today’s AI deployment falls into this category—incremental improvement rather than full transformation. To unlock real value, companies need to look beyond optimizing individual tasks and start reimagining entire workflows. Doing so requires top-down leadership. Often no single person working on one step has the authority to reshape the entire process, which is why executive-level direction becomes essential. Real impact comes from tailoring AI strategy to each organization’s specific context rather than following generic industry playbooks.

There is a growing debate about whether we are in the midst of an AI bubble or simply an early infrastructure build-out comparable to the internet era.
How do you distinguish between speculative hype and genuinely durable AI value being created today?

At the application layer, I don’t think we’re in an AI bubble. AI is expanding rapidly across business use cases—how we process legal and technical documents, manage customer success workflows, conduct research, and much more. I would like to see more investment in AI applications and inference infrastructure. Right now, there simply isn’t enough inference capacity, and worries around rate limits exist. The more interesting question about a potential bubble sits in the model training layer, where infrastructure spending continues to surge. If any risk exists, it’s highest there because the largest investments are concentrated among a small number of players. When companies build highly specialized hardware that can only be reused for inference with some inefficiency, the risk of overbuilding increases. I don’t think we’re overbuilding right now, but if any part of the AI market faces that possibility, it’s the training layer.

As the industry moves beyond a single-model mindset toward more diverse agentic systems, how should enterprises think about AI architecture? Is there likely to be one dominant framework for building scalable, real-world AI systems—or will organizations need a more flexible approach?

Software can range from five lines of code to massive systems that run for years. Because of that range, there won’t be a one-size-fits-all approach to building or governing these systems. Just as we don’t use a single framework to manage everything from simple scripts to enterprise platforms, we won’t rely on one architecture for agentic AI. Human work itself is incredibly diverse—from basic tasks like spell-checking to analyzing complex financial documents. Since the work varies so much, the AI systems we build will also need to vary. One principle my teams follow when building agentic AI systems is speed, as continuous improvement is essential.
Our typical cycle involves building carefully to avoid major risks, testing with users, gathering feedback, and refining the system until it truly works well. That rapid loop is what helps teams build reliable, high-performing systems faster.

Agentic AI is rapidly increasing systems’ ability to reason and act with limited human intervention. Does the rise of agentic architectures meaningfully accelerate the path toward AGI, or are we still far from true general intelligence?

Most of the public thinks of AGI as AI that is as intelligent as people, and one useful definition is AI that can perform any intellectual task a human can. You and I could learn to fly an airplane with maybe 20 hours of training, learn to drive a truck through a forest, or spend a few years writing a PhD thesis. Most humans can do these things. We’re still very far from AI meeting that definition of AGI. For alternative definitions that some businesses have put forward—definitions that dramatically lower the bar—you could argue we already achieved AGI. There’s a good chance that under these lower-bar definitions, some businesses will soon try to declare success. But that won’t mean AI has reached human-level intelligence—it will simply mean the definition has been reworked to fit a much lower threshold. Maybe a year ago, AGI felt 50 years away. Over the past year, perhaps we’ve made a solid 2% of progress, with another 49 years to go. These numbers are metaphorical, so don’t take them too seriously. [Laughs] But we are closer than before, yet many decades away from an AI that matches human intelligence. If you stick with the original definition—aligned with what people genuinely imagine AGI to be—we remain very, very far away.

Is geopolitical fragmentation reshaping global AI strategy for both governments and enterprises?

One of the other big themes I’m seeing is sovereign AI.
The world is becoming more fragmented, and there’s a lot of discussion about how nation-states want to make sure they have access to AI without needing to rely on other nations or any single company that they may not fully trust or be able to rely on in the long term. Governments and regions are thinking carefully about how to build and maintain their own AI capabilities so they can remain competitive and secure. As AI becomes more central to economic growth and national security, this question of who controls the infrastructure and models becomes much more important. So alongside enterprise adoption, there’s also a growing geopolitical dimension to AI deployment.

In 2026, as enterprises search for real economic returns from AI, what leadership decisions and workforce shifts will ultimately determine whether organizations capture meaningful value from agentic systems?

Leadership matters. When I work with CEOs, I see decisive moments when the C-suite must think strategically about what to invest in and then place those bets thoughtfully, guided by a clear understanding of what the technology can and cannot do—not just the surrounding hype. In periods of transformation, leadership decisions determine whether an organization captures real value from AI or merely experiments at the margins. I often speak with CEOs before they set a major strategic direction. No one knows exactly where AI will be in a few years, so we are operating in a kind of fog of war. But uncertainty does not mean we don’t know anything. Teams and partners who understand the technology well can narrow that uncertainty significantly and make far more informed decisions. At the same time, everyone should learn to code—or at least learn to build software with AI. AI has lowered the barrier to creating custom tools. Today my marketers, recruiters, HR professionals, and financial analysts who use AI to write code are already more productive than those who do not.
When I hire, I increasingly prefer people who know how to build with AI assistance. I may have been early on this shift, but I now see more startups and established companies moving in the same direction. Just as it became unthinkable to hire someone who could not search the web or use email, I am already at the point where I hesitate to hire knowledge workers who cannot use AI to build or automate with code. View the full article
-
February 2026 Google Discover Core Update Is Done Rolling Out After 3 Weeks
Google's February 2026 Discover core update has officially completed rolling out after just over 3 weeks. The update started on February 5, 2026, and was completed on February 27, 2026. View the full article
-
Google February 2026 Discover core update is now complete
The Google February 2026 Discover core update has finished rolling out, starting on February 5, 2026, and completing just over 21 days later on February 27, 2026. This was the first confirmed Google Search update this year, and the first-ever Discover-only update that Google announced. Normally, Google core updates impact both Search and Discover, but this one impacts only content within Google Discover.

U.S. and English. Google said the update currently only impacts English-language users in the U.S., but it will expand to all countries and languages in the coming months.

More details. Google said the Discover core update will improve the “experience in a few key ways,” including:

- Showing users more locally relevant content from websites based in their country.
- Reducing sensational content and clickbait.
- Highlighting more in-depth, original, and timely content from sites with demonstrated expertise in a given area, based on Google’s understanding of a site’s content.

Because the update prioritizes locally relevant content, it may reduce traffic for non-U.S. websites that publish news for a U.S. audience. That impact may lessen or disappear as the update expands globally. Google also made some tweaks to the Get on Discover help page, so review that page as well.

Google added that many sites demonstrate deep knowledge across a wide range of subjects, and its systems are built to identify expertise on a topic-by-topic basis. As a result, any site can appear in Discover, whether it covers multiple areas or focuses deeply on a single topic. Google shared an example: “A local news site with a dedicated gardening section could have established expertise in gardening, even though it covers other topics.
In contrast, a movie review site that wrote a single article about gardening would likely not.” Google said it will continue to “show content that’s personalized based on people’s creator and source preferences.” During testing, Google found that “people find the Discover experience more useful and worthwhile with this update.”

Why we care. If you get traffic from Google Discover, you may have noticed changes in that traffic. Again, the update should affect U.S. English users only and only impact your Discover traffic. I will say, there has been a lot of Google Search organic volatility, but Google has not confirmed any of those reports. If you need guidance, Google says its “general guidance about core updates applies, as does our Get on Discover help page.” View the full article
-
Greens’ shift to left wins over disaffected Labour voters
Environmental concerns have moved down agenda under leadership of Zack PolanskiView the full article
-
How to create connection at work that doesn’t feel forced
Early in my career, a colleague and I made a shared commitment one summer to eat healthier. Salads. Smoothies. The full routine. Like many well-intentioned plans, our discipline began to fade after a few weeks. Eventually, we introduced what we jokingly called Grease Wednesdays, a weekly cheat day as a reward for all our good behavior. Every Wednesday, one of us would head out to grab fast food, and we’d hide away in a small boardroom to indulge in our shared lack of nutritional discipline. At first, it was just the two of us, chatting with laptops closed and fries on the table. And then coworkers began peeking into whatever boardroom we were in, curious about the laughter. Eventually, someone asked if they could join. Then another. Within weeks, we had outgrown the small meeting room. Within months, we had moved into the department’s largest boardroom to accommodate the growing crowd. What started as a casual indulgence became a shared ritual. And without intending to, Grease Wednesdays began to change our department culture. We all began to get to know each other as individuals, with pets and families and hobbies. The ritual also smoothed tensions between departments, built friendships between unfamiliar teammates, and helped us realize we hadn’t felt all that connected before. Recent research shows the disconnection I witnessed in my own team is now part of a broader workplace trend. A 2025 survey of U.S. workers found nearly 40% report feeling lonely at work, and employees who lack social connection are significantly more likely to consider leaving their jobs because of it. When people feel they belong, trust builds, collaboration accelerates, performance rises, loyalty deepens, and well-being improves. When they don’t, silos form, trust erodes, and discretionary effort fades. 
Take these numbers: a recent BetterUp survey found that workplace belonging leads to a 56% increase in job performance, a 50% reduction in turnover risk, and a 75% decrease in employee sick days.

THE PROBLEM WITH OVER-ENGINEERING CONNECTION

Belonging is not accidental; it’s cultural. And culture is shaped, reinforced, and protected by a leader’s vision, values, behavior, and accountability, including what I call positive accountability. But this is where many organizations misstep. When leaders notice disconnection, the instinct is often to formalize solutions with more engagement meetings, structured team building, and mandatory social events. Yet forced connection and fun rarely produce authentic trust. In fact, over-engineering connection can make people more guarded. For instance, research from the University of Sydney found that when team-building activities feel mandatory, they can create resentment and pushback among employees. Belonging grows best in environments that feel natural, voluntary, and human, not observed or measured. If you want to improve connection and belonging in your workplace while avoiding forced connection, here are some steps you can take.

DESIGN INTENTIONAL SPACES

What made Grease Wednesdays powerful wasn’t the food. It was the opportunity that a casual ritual created. We had, quite by accident, built a small, repeatable, low-pressure interaction in which familiarity could grow. Design offers a strong middle ground between compulsory team-building exercises and complete social neglect. The key here is to design small, optional, and repeatable opportunities that humanize the workplace. For in-person teams, you can host walking one-on-one meetings, Friday coffee drop-ins, no-agenda team lunches, or cross-department donut runs. For remote teams, you could host 15-minute morning online coffee drop-ins or no-agenda virtual team lunches, and share team celebrations of birthdays, anniversaries, and project completions.
Keep it light; keep it optional; keep it ritual.

MODEL OPENNESS

Studies in organizational research find that when leaders are open, available, and accessible, employees feel more psychological safety. Psychological safety, a term coined by organizational psychologist Amy Edmondson, is the shared belief within a team that it is safe to take interpersonal risks, like speaking up with ideas, questions, concerns, or mistakes, without fear of punishment, humiliation, or retribution. To build psychological safety in teams, leaders can model openness. Do that by admitting when you don’t know something, sharing a decision you’ve reversed (and why), and publicly thanking a team member who challenged you. Another way to model openness is through positive team accountability: share the successes you see and are proud of within the team. For example, one leader I work with sends out an email to his team every two or three weeks. The irregular timing is effective by design, making the email feel more authentic.

REWARD CONNECTION, NOT JUST OUTPUT

Social psychology research shows that reciprocity in the workplace builds trust, cooperation, and positive relationships. The principle of social reciprocity, recognizing and responding to positive actions, contributes to stronger workplace dynamics and mutual respect, the core components of connection and belonging. One way to do this is to shift what gets publicly praised. If the only Slack shout-outs are for revenue, speed, and delivery, people will assume that is all that matters. Instead, reward connection when recapping projects in team meetings by asking, “Who helped make this possible?” You can also celebrate the people who mentor, unblock, and build bridges across teams. When helping behavior is acknowledged, rewarded, and career-relevant, connection stops being invisible labor and becomes part of how success is defined. Full offices don’t cure loneliness, but intentional culture does.
When leaders design natural rituals, model openness, and reward connection as deliberately as they reward performance, belonging is no longer accidental; it becomes part of how work actually works. View the full article
-
The middle manager’s playbook for staying sane and moving up
Being a middle manager often feels like living in two worlds at once. On one side, executives cascade big goals and sweeping strategies. On the other, teams look to you for clarity, advocacy, and daily guidance. You’re constantly reconciling top-down demands with bottom-up realities, often with too little time and too few resources to satisfy either side.

The paradox of the role is stark: Middle managers carry enormous responsibility for execution but don’t always have the authority to make critical decisions. You’re expected to deliver results on budgets you don’t control, within structures you didn’t design, and through policies you didn’t write. This tension is one of the biggest sources of chronic strain. One survey found that middle managers reported higher burnout rates (36%) than non-managers, while another showed that 71% are “sometimes” or “always” overwhelmed at work.

But here’s the good news: The middle isn’t just where pressure piles up. It’s also where strategy becomes reality, where culture is lived (or lost), and where agility gets tested in real time. If you can reframe the squeeze as an opportunity, middle management becomes less a grind and more a proving ground. Here are four ways to turn the pressure into potential:

BUILD YOUR COALITION

If you think of your team only as your direct reports, you’re missing the larger playing field. Work today is inherently cross-functional, which means your effectiveness hinges on your ability to influence sideways and upward, not just to manage downward. Peers hold the resources and expertise you need. Leaders above you control priorities, approvals, and air cover. Without credibility in those directions, even flawless execution within your own group can collapse at the edges. Research shows that misalignment between teams is one of the biggest drivers of wasted work. When priorities or interpretations differ, teams can spend weeks pulling in opposite directions.
Middle managers who proactively build peer alignment surface these gaps early and save everyone time and frustration. The fix isn’t complicated, but it is intentional: cultivate your network. A short, well-timed conversation with a peer or senior leader can prevent the kind of breakdowns that leave your team spinning. Think of it less as “networking” and more as preemptive damage control. The middle managers who thrive are the ones who invest in relationships that make the work move.

MASTER THE ‘PRACTICE’ OF LEADERSHIP

Leadership is often packaged as a set of sweeping competencies or treated like a fixed trait you either have or don’t. In reality, leadership is shaped over time, forged through daily choices, interactions, and repeated practice. While traditional leadership development focuses on broad skills taught in workshops or courses—what we call horizontal development at Sounding Board—many real-world challenges require something deeper. Vertical development helps managers think more complexly, adapt to evolving contexts, and lead with lasting impact, not just quick fixes. This kind of development happens through practice, not theory. Neuroscience supports it: Consistent, real-world repetition strengthens the neural pathways that anchor adaptability and retention. At BTS, we’ve seen that transformational leadership often hinges on unlocking specific mindset shifts, patterns where leaders typically get stuck and need to evolve to grow.

So, how do you start? Find smaller moments to experiment. Instead of waiting for a performance review, try a quick debrief after a call with a direct report. Test a new communication approach in a team meeting before the next town hall. You can even name your intention to those around you. Letting others know you’re trying something new sets expectations and invites helpful feedback.
LEVERAGE AI FOR ON-DEMAND SUPPORT

Your toughest challenges don’t show up as theory; they show up in the form of messy, human situations: a disengaged direct report, a senior leader who keeps moving the goalposts, a peer who won’t align. These problems don’t have one-size-fits-all solutions, which is why coaching is so powerful. For decades, personalized coaching was a privilege reserved for executives. But with AI practice bots paired with guidance from real coaches, middle managers can get development that’s personalized and scalable when they need it. These tools let you rehearse tough conversations, like giving feedback or delegating more effectively, in a low-stakes environment. Coaches help you translate insights into actions and longer-term mindset shifts. The result is leadership growth that’s less abstract and more actionable.

The smartest move? Start small. Pick one conversation you’ve been avoiding and rehearse it with an AI practice bot. You’ll uncover blind spots, test new approaches, and walk into the real thing with more confidence and control.

MAKE UNCERTAINTY YOUR PLAYGROUND

The defining condition of modern work is uncertainty. Markets swing, technologies disrupt, priorities pivot. If you wait for clarity, you’ll always be behind. The managers who thrive aren’t the ones who resist ambiguity, but those who use it as a catalyst to experiment and learn. One biopharmaceutical company I worked with recognized this when it expanded leadership development beyond senior executives to include middle managers. After providing leadership training focused on managing ambiguity and integrating AI into workflows, the company paired each manager with a coach to help translate learning into action. The result was faster decision-making and stronger cross-functional collaboration during a major pivot. When you stop treating uncertainty as a threat and start treating it as a laboratory, you shift from surviving change to shaping it.
With these practices, middle management isn’t a burden, but a launchpad for growth. View the full article