All Activity
- Past hour
-
Google Tests AI Headlines, Rolls Out Spam Update – SEO Pulse via @sejournal, @MattGSouthern
Google tests AI headline rewrites in Search, completes the March spam update in under 20 hours, and adds AI content labeling to structured data docs. The post Google Tests AI Headlines, Rolls Out Spam Update – SEO Pulse appeared first on Search Engine Journal. View the full article
-
If your local rankings are off, your map pin may be the reason
The local SEO community remains locked in a permanent debate over the “hide address” toggle for service area businesses (SABs). Most owners view this switch as a simple privacy setting. In reality, it’s a high-stakes decision that dictates how Google’s algorithm interprets your physical relevance.

Does your defined service area influence where you rank? Does hiding your street address suppress your visibility in the local pack? Most importantly, does Google purge that data from its system, or does your map pin simply become an invisible anchor? These are fundamental questions about how proximity functions when you choose to go off the grid.

How Google actually places your map pin

To be clear, the address and the map pin aren’t the same thing. When you enter an address into your Google Business Profile, Google doesn’t simply drop a pin. It runs the address through its geocoding engine to resolve the text string against its internal database. To understand why a map pin ends up in a highway median or a city center, you must examine Google’s internal data models:

- GeostoreAddressProto: How Google stores and parses a business address.
- GeostorePointProto: How Google stores the actual map pin location.
- GeostoreServiceAreaProto: How Google stores the regions a business serves.

Google is looking for a match it can trust. When it finds a high-confidence match, it places the pin at the rooftop of your building. Once you understand how these three models work together, you gain clarity on why Google appears to rank SABs differently in the local map pack.

Is your map pin placement a bug or the default?

Make no mistake: this isn’t a bug. It’s a fundamental breakdown in how Google translates a text string into a physical coordinate. When this translation fails, your business ends up with a misplaced map pin, which directly misplaces your local proximity authority.
When Google can’t find a high-confidence match at the building level, it doesn’t just leave your pin floating. Instead, it falls back to the most reliable geographic feature it can confidently resolve. In most cases, that fallback is the city centroid (the geographic center of the municipality tied to your address). Google’s own Geocoding API documentation outlines this fallback logic, explaining why pins for businesses with perfectly visible, verified addresses sometimes end up dumped in the middle of a city.

Simply put, if your address isn’t recognized by Google’s internal systems, the geocoding process lacks the confidence to place the pin precisely. If Google can’t reconcile your GeostoreAddressProto with a high level of certainty, it may not anchor your GeostorePointProto to your building’s rooftop.

Dig deeper: The proximity paradox: Beating local SEO’s distance bias

When does geocoding lose confidence?

Geocoding loses confidence when a business shares a generic building footprint, lacks a distinct suite number, or is placed in a newly developed zone that Google’s Street View API hasn’t yet mapped. A building that’s newly constructed or recently added to a commercial complex may not yet exist in Google’s geographic database with enough detail for a rooftop-level match. The street and city exist, but the specific parcel hasn’t accumulated enough mapping data for Google to confidently place a pin.

To understand why, it helps to know how Google’s geocoding data actually gets populated. Google’s own developer documentation states that data collection is a periodic process, and new construction data can take time to be reflected in Google Maps. The address hierarchy Google geocodes against is built from a combination of sources, including satellite imagery updates, municipal records, and USPS address data, none of which updates in real time.
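This fallback is visible in the Geocoding API itself, which reports its confidence in the `geometry.location_type` field of each result. A minimal sketch of such a check; the helper function and sample response dict are my own illustration (a real check would call the live API with a key), but the `location_type` values are the ones the API documents:

```python
# Confidence levels the Geocoding API can return, roughly ordered
# from most to least precise.
CONFIDENCE = {
    "ROOFTOP": "building-level match: pin should sit on your rooftop",
    "RANGE_INTERPOLATED": "estimated along the street: verify the pin",
    "GEOMETRIC_CENTER": "center of a street or area: pin may drift",
    "APPROXIMATE": "low confidence: likely a centroid fallback",
}

def geocode_confidence(response: dict) -> str:
    """Read geometry.location_type from a Geocoding API response dict."""
    if response.get("status") != "OK" or not response.get("results"):
        return "unresolved: Google could not geocode the address"
    location_type = response["results"][0]["geometry"]["location_type"]
    return CONFIDENCE.get(location_type, f"unknown type: {location_type}")

# Fabricated example response, shaped like the API's JSON, showing
# the centroid-fallback case described above.
sample = {
    "status": "OK",
    "results": [{"geometry": {"location_type": "APPROXIMATE"}}],
}
print(geocode_confidence(sample))
```

Anything other than a rooftop-level result on your own verified address is a signal worth investigating before touching rankings.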
When the API resolves an address, it returns one of four location types: ROOFTOP, RANGE_INTERPOLATED, GEOMETRIC_CENTER, or APPROXIMATE.

The suite number problem

I’ve said this to clients more times than I can count. It seems like a minor formatting detail. It isn’t. When a business enters something like 1234 Main Street, Suite 200, in Address line 1, Google’s geocoding engine attempts to resolve that entire string as a street address. Suite numbers are unit identifiers. They exist within buildings. They aren’t street-level geographic data, and Google’s geocoding process doesn’t use them to identify rooftop locations.

Embedding a suite number in Address line 1 introduces a conflict into the geocoding query that the system can’t cleanly resolve against a physical coordinate. Instead of anchoring the pin to your building, the geocoding process encounters a string it can’t fully parse at the street level, loses confidence, and falls back, often all the way to the city centroid. The result can send clients driving to the wrong location, or to the middle of a highway.

Proximity at the pin vs. proximity at the address

A profile verified at a physical address doesn’t rank based on the visible address. I recently managed a new listing where a geocoding conflict forced the map pin to the city center of Houston, miles from the actual office. While the text on the profile showed the correct street address, the ranking was anchored entirely to a misplaced coordinate in the downtown centroid. In this instance, a suite number was embedded directly into the primary address field. When Google’s system can’t cleanly parse a street number and name, it often defaults to the city centroid as the best available data point.

This isn’t an edge case. Whether it’s a suite number on the wrong line or a new construction site, these formatting errors trigger geocoding failures that are notoriously difficult to unwind. The client’s ranking data confirmed the technical reality.
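The fix for the suite-number conflict is mechanical: strip the unit designator out of the primary address line before entry. A small sketch of that cleanup; the regex, keyword list, and helper are my own illustration, not anything Google provides:

```python
import re

# Unit designators that belong in Address line 2, not line 1.
# This keyword list is illustrative, not exhaustive.
UNIT_PATTERN = re.compile(
    r"[,\s]+((?:suite|ste\.?|unit|apt\.?|floor|fl\.?|#)\s*\S+)\s*$",
    re.IGNORECASE,
)

def split_address(line1: str) -> tuple[str, str]:
    """Move a trailing unit designator from Address line 1 to line 2."""
    match = UNIT_PATTERN.search(line1)
    if not match:
        return line1.strip(), ""
    street = line1[: match.start()].strip().rstrip(",")
    return street, match.group(1).strip()

# The problem string from the article, split the way Google wants it:
# line 1 gets only the street number and name, line 2 gets the suite.
print(split_address("1234 Main Street, Suite 200"))
```

Run against the article’s example, this yields a clean street string for line 1 ("1234 Main Street") and pushes "Suite 200" to line 2, which is the format the geocoder can resolve at street level.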
For high-competition terms like “water damage restoration,” the business didn’t rank based on its physical office. It ranked based on where the pin was dropped. If your pin is in a highway median or a city center due to a formatting error, that is where your proximity authority lives.

Map ranking in downtown Houston
Map ranking at the office

What this means for service area businesses

If you have a service-area business, the stakes are higher, and the scenarios are more complex. When Google reprocesses that address and the geocoding fails to anchor cleanly from the beginning, the business owner has no easy way to know. A storefront owner can open Google Maps, pull up driving directions to their location, and immediately see where the pin landed. An SAB with a hidden address can’t do the same quick check. The address isn’t visible on the profile, and the pin placement isn’t clearly surfaced in the dashboard or on Maps. The business is left with poor ranking reports and no obvious explanation. They may never realize the pin drifted at all.

Their verified address may be a home office or a shared workspace, and if it’s a shared workspace, the geocoding problem gets worse. Regus locations and similar co-working buildings are among the most geocoding-hostile addresses an SAB can use. These are large commercial buildings with dozens or hundreds of unit numbers, multiple tenants, and high address turnover. My hypothesis is that Google’s geocoding engine assigns lower confidence to these addresses precisely because the unit-level data is so dense and inconsistently mapped. The result is a pin that may never anchor properly to begin with, and an SAB operator who has no easy way to verify where Google actually thinks they’re located.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

The Farmington Hills fallback

My business’s GBP functioned as a verified storefront in Farmington Hills for years.
Three years ago, I moved the operation to a new office in Pontiac and updated the address accordingly. The listing appeared as a storefront until I triggered a reverification while testing a separate case study. Because I work primarily from home and hadn’t invested in signage at the new Pontiac location, Google forced the profile into service area business status. Even though the dashboard displayed a Pontiac address for several months, the map pin reverted to Farmington Hills as soon as I toggled to hide the address. This fallback exists behind the scenes, effectively anchoring the business to a location it hasn’t occupied in over a thousand days.

This is a ranking disaster for any business owner. I struggle to rank in my city for the “marketing agency” category because Google is calculating my proximity from an old office. If a business transitions from a storefront to an SAB after changing addresses, editing the existing listing is a risk. I was set up as a storefront at the new address for several months. The most effective path forward is to create a new listing for the business and request a review transfer. This can’t be fixed by Google support.

Supporting evidence: What Google’s own patents say

Google has filed and been granted multiple patents that describe the underlying systems at work. These patents are directly relevant to how geocoding, pin placement, and local ranking interact.
- US8312010B1, “Local Business Ranking Using Mapping Information”: Outlines the core pipeline connecting an address to a map pin, establishing that the inputted address and the resolved geocode are two separate entities.
- US8046371B2, “Scoring Local Search Results Based on Location Prominence”: Describes a dual scoring system: documents within a geographic area are scored by location prominence factors (authoritative document score, citation volume, review count, and mention count), while documents outside the area are scored by distance from a defined center point such as a postal code centroid or the midpoint of the active map window.
- US20090177643, “Geocoding Multi-Feature Addresses”: Explains how ambiguous or improperly parsed address components produce lower-confidence geocode outputs, resulting in broader map pin placements rather than rooftop-level matches.
- US7894984B2, “Digital Mapping System”: Describes the geocoding/geomap server that converts a street address into a single latitude/longitude coordinate and overlays it as a location marker on a map image. Establishes the mechanical basis for map pin placement and documents that pin position is derived from the resolved coordinate, not the inputted address.

Best practices for properly anchoring your map pin

A well-geocoded address with a narrow service radius gives Google the most confident, stable picture of where your business operates.

- Check your Address line 1: Suite numbers, unit numbers, floor numbers, and building names belong in Address line 2. Line 1 should contain only the street number and street name.
- Check whether your building geocodes cleanly: You can test this in Google Maps directly, or search your address in the developer’s geocoding page and see where the pin lands. More importantly, see how Google is parsing the address, and enter it the same exact way.
- Be prepared for verification: Correcting a geocoding conflict in an existing profile almost always triggers a new verification request. This is expected. Work through it. Don’t make additional edits until verification is complete, as multiple pending changes can restart the cycle.

Why geocoding confidence is your local ranking foundation

The friction between an address string and Google’s geocoding confidence isn’t a minor technical glitch. It’s a fundamental ranking blocker. Google values data stability and confidence over your recent dashboard edits. If you’re struggling with a pin that refuses to anchor, or an SAB that won’t rank, you’re likely fighting a geocoding pin placement issue that can’t be solved with standard optimizations or, for that matter, Google support. Stop trying to out-content a broken map pin. The pin is the ultimate proximity indicator that Google needs to confidently rank your business. The underlying issue isn’t complicated: Google needs a clean, parseable address string to anchor your pin at the building level. View the full article
- Today
-
3 ways to take the ‘work’ out of networking
You’ve spent years building a robust professional network. You’ve cultivated relationships with peers, mentors, and industry leaders. So when you signal that you’re exploring new opportunities, you expect your network to perform. Yet too often, promising conversations dissolve into silence. Warm introductions never materialize. Emails go unanswered. This isn’t a reflection of your professional standing. It’s a design problem: you’re making it too hard for people to help you. The fix is straightforward. Make it easy. Here are three ways to do so.

Ask To Write to Their Contact Directly

When you reach out to a contact seeking an introduction to a decision-maker, a common response goes something like this: “Absolutely — send me your résumé and I’ll forward it to see if there’s interest.” It sounds helpful, but rarely is. The fundamental problem: you’ve just handed over control of your own job search to someone with a dozen other priorities. Even the most well-intentioned contact may not follow through—because the timing isn’t right for their colleague (the chances they need your résumé at any given moment are small), because it slipped off their radar, or because the introduction they made on your behalf didn’t do you justice.

The solution is to reclaim the driver’s seat. When a contact offers to pass your résumé along, respond with something like: “I really appreciate it. To save you time, could I reach out to your colleague directly and simply mention that I was referred by you? I’m also looking to build a relationship for opportunities now or down the road, so I would rather not forward a résumé that implies I need a job quickly. Would this work?”

This proposal removes the burden from your contact while giving you control over the pitch. It also avoids the résumé-forward trap—a résumé implies “please hire me now,” when your real goal is to get an informational meeting with a decision-maker and then keep in touch for future opportunities or get additional referrals.
Half of your networking contacts will agree, and now you can use their name to gain attention: “Subject: Referred by [Contact], re: [Topic].” But what about the contacts who want to make the introduction themselves?

Send a Forward-Friendly Email

Many contacts will respond with something along the lines of “Let me reach out to my colleague first to see if they’d be interested in speaking with you.” In that case, offer to send them a forward-friendly email. This move dramatically improves the likelihood that they will actually follow through, because you’ve reduced their effort from 15 minutes spent figuring out how to pitch you to just 2 minutes of forwarding. You’re also improving the odds that their contact will want to meet with you, since you can include a field-tested pitch explaining why a conversation could be mutually beneficial. The content is virtually the same as the “Referred by …” email; just start it differently:

“Subject: Introduction to Katherine Johnson, re: BigCo

Dear Rosalind,

Thanks for offering to forward my information to Katherine. As discussed, below I’ve shared my background and why I believe a meeting could be mutually beneficial.”

One important note on content: resist the urge to attach your résumé unless there’s a specific opening you’re pursuing. Instead, use your LinkedIn profile as your “low-key résumé.” The impressive content in your thoroughly filled-out profile will drive credibility without signaling desperation.

Have a Clear Job Target

Too many executives prolong their searches because they position themselves too broadly, not wanting to miss an opportunity. The problem: your network finds it harder to advocate for you when your message is watered down across multiple job targets. Worse, you may be asking your contacts to do the heavy lifting of translating your varied background into specific opportunities. That is your job, not theirs. One client came to me after a long, frustrating search.
I quickly saw the issue: she was pitching herself to her network as open to Partnerships leadership roles at Fortune 500 companies, COO roles at startups, or Commercialization roles at any company. Three quite varied targets, not connected by a strong theme, led to ineffective messaging. Once we prioritized, she relaunched her outreach with a focused, powerful pitch for COO roles at startups. Within weeks, the interviews began to materialize. A narrow pitch may feel counterintuitive—but it’s what makes your networking more effective, since people can refer you more easily when they see you clearly in a specific role.

The Bottom Line

Your network wants to help. Your job is to make that help feel effortless—not like a second job. Write the emails they can forward, or email their contacts directly. Do the targeting they shouldn’t have to. And keep yourself in the driver’s seat. The opportunities will follow. View the full article
-
Noodles & Company closed dozens of restaurants last year. Here’s why the stock price is soaring in 2026
As part of a strategic move to optimize its store footprint, Noodles & Company closed 33 company-owned restaurants in 2025. In January, the chain said it would close dozens more stores this year. However, despite the shrinking restaurant count, sales have grown. The fast-casual eatery held its fourth-quarter and full-year 2025 earnings call on Wednesday, March 25. It reported that comparable store sales increased 6.6% in the final quarter of 2025. Sales growth and traffic are also up as of early 2026.

Following the strong earnings report, shares of Noodles & Company (Nasdaq: NDLS) soared over 50% on Thursday. The stock is up almost 60% year to date as of premarket trading on Friday. That’s a significant contrast to the broader Nasdaq Composite, which is down 7.78% for 2026 so far.

How store closures have helped same-store sales

Despite having closed more than 30 stores in 2025, Noodles & Company reported system-wide comparable store sales growth of nearly 7% in the fourth quarter of 2025. On Wednesday’s earnings call, CEO Joe Christina told investors that the restaurant closures “resulted in a material transfer of sales to nearby locations . . . which also favorably impacted margins.” And store closures haven’t stopped customers from spending money. CFO Mike Hynes explained during the call that a significant portion of Noodles & Company customers place takeout or delivery orders, so they’ve continued to order from nearby locations that remain open. “The most meaningful impact is the post-closure transfer of sales to nearby Noodles & Company restaurants, which is driving a significant increase to our company-wide restaurant-level profits.”

New menu items also drove traffic

Menu changes and limited-time offerings have also played a significant role in driving sales and traffic growth, Christina said on the call. “A great example is chili garlic ramen, which we introduced as a limited time offer in October,” he said.
“Inspired by trending ramen hacks, this brothless bowl delivered the buttery, spicy, umami-packed flavors guests were already craving. It quickly became one of the strongest [limited-time offers] in our history.” He noted that the trendy dish resonated well with loyalty program members and also brought in new customers. Because of its success, Noodles & Company is evaluating other ramen recipes. Christina also credits the fast-casual noodle chain’s value-focused messaging, “giving guests compelling meal combinations and an attractive price point that delivered balance, variety, and everyday affordability without compromising quality, while also raising consumer awareness to our new menu offerings.”

Hourly workers have been most impacted by the store closures

While an optimized physical footprint may be producing results for the company, store closures have come at a real cost to employees, primarily hourly workers. According to Noodles & Company’s year-end 2025 10-K filing with the Securities and Exchange Commission (SEC), the fast-casual eatery employed approximately 6,000 hourly workers as of December 30, 2025, down from 6,800 a year prior. That’s a net loss of roughly 800 hourly jobs in one year. Meanwhile, the company’s salaried worker headcount remained unchanged during that same period, with 500 salaried workers reported for both years. View the full article
-
Google Adds Scenario Planner, Performance Max Updates, And Veo – PPC Pulse via @sejournal, @brookeosmundson
This week’s PPC Pulse covers Performance Max reporting updates, GA4 budget planning tools, and Veo AI video in Google Ads. The post Google Adds Scenario Planner, Performance Max Updates, And Veo – PPC Pulse appeared first on Search Engine Journal. View the full article
-
With Sora’s death, AI’s age of frivolity may be ending
Hello again, and welcome back to Fast Company’s Plugged In. Before we get underway, a little self-promotion: Apple’s 50th anniversary is on April 1. As the big day approached, I realized that many people present at the company’s creation were still very much with us. So I interviewed 23 of them for an oral history, “How Apple Became Apple: The Definitive Oral History of its Earliest Years.” It’s chock-full of great tales as told by everyone from cofounder Steve Wozniak to Liza Loop, the first Apple user. Hearing these pioneers reminisce, I felt like I had been there, too—and so will you, I think. Here’s the article. When OpenAI launched its Sora app last September, the video-centric social network arrived on a tide of buzzy goodwill. Its feed of 10-second video clips had a TikTok-esque vibe—except that it was filled with AI-generated stuff instead of anything remotely real. In less than a minute, Sora users could create digital doppelgängers of themselves that were eerily convincing for use in their own clips and, optionally, those created by others. The result was playful, goofy fun, and far more intriguing than Meta’s theoretically similar but painfully bland Vibes. But if Sora ends up being remembered for anything, it won’t be for existing. Instead, it will have made its mark by going away. On March 25, OpenAI announced that it was killing the app, along with the Sora API that let developers generate their own videos using the company’s technology. The decision appeared hasty: OpenAI still hasn’t shared details on when, exactly, Sora will cease to exist, or how users can download their videos for preservation. Most of the insta-reaction I’ve seen to Sora’s demise amounts to grave tap-dancing of one sort or another. People are helpfully explaining that the app was a stupid idea from the start, or assailing it as a slop machine that deserved its fate. But I’m not ashamed to admit that I will miss it. 
For reasons I wrote about shortly after its debut, escaping to Sora’s weird little world always brightened my day. For one thing, I found the app to be a genuine canvas for creativity, albeit in brief, inherently inconsequential bursts. My feed was full of fake commercials, fabricated vintage news clips, and other snippets of fantasy content that were like glimpses of bizarre alternate realities. An oddball crew of deceased celebrities—Larry King, Richard Nixon, Queen Elizabeth II—often starred in them, sometimes in uncannily convincing form and sometimes as vague approximations. On an internet that can feel unrelentingly grim, Sora’s essential absurdity made me laugh. Counterintuitively, I also found comfort in the fact that the app was all AI, all the time. Conventional social media such as Facebook, Instagram, and TikTok is now befouled by true AI slop, generated solely to try and attract eyeballs without working very hard. Being exposed to it always feels like an imposition. On Sora, however, I never had to wonder if something was real or not. It wasn’t, and that was the point. I do acknowledge that the app peaked early. The world needs only so many silly imaginary gadget commercials and clips of unlikely celebrities rapping—both my feed and my own ideas for prompts grew repetitive over time. If OpenAI had added more features, or let us create videos longer than 10 seconds, it might have helped the platform develop more substance. Now we’ll never know. Still, I’m not going to make the case that OpenAI’s seemingly abrupt decision to shutter Sora is a terrible mistake. It might actually be a commendable, responsible act—or even the beginning of a trend for the entire AI industry. On March 16, before that move was public, The Wall Street Journal’s Berber Jin reported that Fidji Simo, OpenAI’s CEO of applications, had sent a memo to employees declaring that it was time to get down to business. 
She sent it at a time when archrival Anthropic had made enormous inroads with its Claude Code software-generation tool, the hottest product in AI’s hottest category. “We cannot miss this moment because we are distracted by side quests,” Simo wrote. “We really have to nail productivity in general and particularly productivity on the business front.” One element of this strategy recalibration involves OpenAI releasing a “super app” that rolls ChatGPT, Codex, and the Atlas web browser into one piece of software, roughly akin to what Anthropic has already done with the Claude desktop app. As excerpted by Jin, Simo’s memo did not name-check Sora. In retrospect, though, her call to action left it a dead app walking. Rather than facilitating productivity, Sora was frivolous to its core. I certainly got sucked into it any number of times when I had better things to do. But OpenAI isn’t terminating Sora because it might divert users from more productive tasks. It’s doing it because it’s a pricey distraction for the company itself. That OpenAI is suddenly interested in self-discipline is news in itself. Until now, after all, its strategy has seemingly been to do, well, everything. ChatGPT in its current form is just the beginning. The company is also into enterprise agents! Health advice! Epoch-shifting gadgets! Browsers! Chips! Smut, though it’s been delayed! That’s before you get to the unprecedented investment in data center infrastructure it will have to build out to generate all that AI. Maybe a huge, wildly profitable company could reasonably attempt to digest such a sprawling menu of projects simultaneously. OpenAI is not that company. Like much of the AI in our lives, Sora has been running on a gigantic subsidy provided by venture-capital dollars. In November, Forbes’s Phoebe Liu guesstimated that OpenAI might be spending $15 million per day spitting out Sora videos. 
No analysis performed by an outsider stands a chance of nailing the precise cost, but this we know: Video generation is among the most computationally expensive AI tasks, and OpenAI had yet to book its first nickel of Sora user revenue. (It had inked a “landmark agreement” with Disney to use that company’s characters inside Sora, but that $1 billion deal is now off.) If Sora stood a chance of being a profit machine someday, absorbing its current losses—which, over the course of a year, would have likely been in the billions—might not have been wholly irrational. But substantial profit would have come only if the app’s user base had grown gigantic and OpenAI figured out a brilliant way to weave ads into the experience. Though not impossible, that feat would have required vast intellectual capital and tolerance for risk. By comparison, OpenAI focusing on ensuring that its Codex AI software-generation tool is a compelling alternative to Claude Code—one companies are happy to pay for—sounds dead easy. Who can blame Simo for opting not to pursue “side quests” when it’s imperative to get the core ones right? As rational as OpenAI giving up on Sora may be, I hope that it doesn’t represent an end to the theory that a rewarding social network might someday be built around AI-fueled content. The evidence that AI can make social experiences much, much worse is all around us. Given that the technology isn’t going anywhere, I choose to cling to the possibility that someone will figure out how to adopt it in a constructive manner. Maybe even one that won’t bankrupt the company that offers it. You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. 
I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

More top tech stories from Fast Company

This Microsoft security team stress-tests AI for its worst-case scenarios
The company’s Red Team simulates attacks to uncover risks before bad actors do. Read More →

A top AI researcher explains the limitations of current models
François Chollet talks about his deceptively simple new benchmark test for AI models. Read More →

Manus AI cleaned up my computer—for a price
The desktop app can automate all kinds of tedious computing tasks, but the costs can quickly get out of hand. Read More →

Exclusive: This new benchmark could expose AI’s biggest weakness
ARC-AGI-3 tests whether models can reason through novel problems, not just recall patterns, a task even top systems still struggle to do. Read More →

Writer wants to be the go-to AI tool kit for the enterprise
With customizable ‘skills’ and step-by-step ‘playbooks,’ the company aims to help employees automate workflows without touching code. Read More →

This brilliant browser tool purposely makes AI chatbots worse
The extension’s designer calls it a ‘tiny tool of digital sabotage.’ Read More →

View the full article
-
Intuit thinks it’s found your company’s next CFO: AI
Alex Balazs has spent more than two decades inside Intuit, starting as an engineer working on early versions of QuickBooks Online, when moving financial workflows to the internet still felt experimental. Now, as CTO, he is helping lead a more radical shift: turning financial software into systems that can think and act on a user’s behalf. “This combines the speed and scale of AI with human judgment and accountability,” he tells Fast Company. For decades, financial software has functioned as a ledger, categorizing transactions and generating reports about what has already happened. That model is beginning to break. Advances in AI are pushing the category toward real-time interpretation and action, with software that can execute tasks and manage workflows rather than simply record them. The shift introduces a core tension. Financial systems demand precision, accountability, and auditability. AI systems operate probabilistically, producing outputs based on likelihood rather than certainty. As the stakes rise, so does the challenge of trusting machines with financial decisions. Intuit is pushing aggressively into that gap. The company, which controls more than 60% of the SMB accounting software market, is working to turn finance into what it calls a “system of intelligence,” a continuously operating layer that understands financial context and acts on it in real time. Its platform processes roughly 60 billion machine learning predictions per day across a data infrastructure spanning 180 petabytes, serving nearly 100 million consumers and 10 million small and midmarket businesses. The strategy is already translating into growth. In its most recent quarter, Intuit reported $4.7 billion in revenue, up 17% year over year, with operating income rising 44% on a GAAP basis. The company says its platform facilitates close to $890 billion in money movement and $336 billion in payroll annually. 
Under Balazs, Intuit has built what it calls its Generative AI Operating System, or GenOS, designed to coordinate models, data, and workflows into task-specific agents that can execute complex financial operations. Through partnerships with OpenAI and Anthropic, the company is also embedding those capabilities into external AI ecosystems while maintaining control over customer data. Still, the central question remains: If AI begins to function like an autonomous CFO, who is responsible when something goes wrong? Speaking with Fast Company, Balazs argues the answer is not full automation, but a new architecture of trust, and a rethinking of how human expertise fits into increasingly autonomous financial systems. This conversation has been edited for length and clarity. When AI agents are autonomously handling accounting, tax preparation, and cash flow, where do you draw the line between assistance and authority? And should businesses be comfortable handing over that level of decision-making to systems that are, at their core, probabilistic? The customer is always in ultimate control of critical decision-making and is provided with the needed data to help make those decisions. As we continue to build “done-for-you” experiences for customers on the Intuit platform, we’re creating capabilities and experiences where work is done for the customer on our AI-driven expert platform, with their permission. We’ve always put the power in our customers’ hands. This gives us a durable competitive advantage because what matters most to customers when it comes to financial tasks is complete confidence in their high-stakes financial decisions. Leveraging proprietary data, domain-specific AI platform capabilities, and human intelligence, our system of intelligence uses deterministic domain-specific models built on decades of trusted proprietary data.
Intuit Intelligence provides answers grounded in its own proprietary data and will take action on the user’s behalf, through automation and with a handoff to a trusted AI-enabled human expert. This is intelligence rooted in lived financial reality, not generic large language models. As the industry pushes toward full automation, why keep humans so deeply embedded in financial workflows? Where does that handoff actually happen, and what ensures the human layer remains a real safeguard, not just a symbolic one as AI improves? We’ve learned that for financial workflows, AI alone is not enough for confidence. Customers have a psychological need for a “data trail” back to the balance sheet. Our QuickBooks Live offering is growing alongside AI because human experts provide a “domain expert check,” showcasing the power of human intelligence. While AI handles the high-volume categorization, humans provide the “final mile” of context to ensure accuracy. Queries in our system of intelligence aren’t just searches. They hit a “conversational front door” that triggers our Generative AI Operating System (GenOS) to query proprietary data against live transaction data. We address the “confidence gap” through a “show your work” approach, providing a data trail back to the balance sheet and ensuring there are “no dead ends” by handing off complex tasks to live experts (e.g., tax, bookkeeping). One of the surprises we’ve seen in our system of intelligence: We expected accounting questions, but new-to-QuickBooks users are using AI to architect their entire business, even asking about warehouse organization and employee handbooks, for example. Rather than relying on automation alone, we are utilizing human review, oversight, and feedback to validate high-impact outputs, catch errors, refine model performance, and improve decisions over time. Intuit has marketed GenOS as the orchestration layer. 
But as the industry moves toward model-agnostic architectures, with partners like OpenAI and Anthropic, is the real moat shifting away from models to orchestration and data ownership? And if so, what stops that layer from becoming standardized or commoditized as competitors and cloud platforms build similar capabilities? We built our Generative AI Operating System (GenOS) to solve an enormous challenge: making generative AI broadly available for all product teams to develop solutions that integrate the technology safely and responsibly into applications on our platform. In today’s rapidly evolving tech world, our LLM-agnostic strategy gives Intuit technologists the freedom to choose from a catalog of best-in-class commercial LLMs (15+ LLMs, 70+ versions) and our own proprietary custom-trained Intuit Financial LLMs. GenOS includes embedded guardrails for security with protections designed to address risks such as prompt injection, data leakage, and harmful outputs, all within a broader responsible AI governance framework. The platform also uses standardized runtime and user-experience layers so teams can build, monitor, and improve AI features consistently, helping deliver more reliable performance and a stable experience at scale across products. Intuit operates across consumers, SMBs, and now the mid-market, while ERP vendors, fintechs, and cloud providers all push to own the enterprise AI layer. What is the platform’s key differentiator and real moat in this race? And as incumbents embed AI into their stacks and hyperscalers control the infrastructure, what prevents Intuit from getting squeezed in the middle as the market consolidates? We’re at the beginning of a new era of agent-led growth in financial services that represents a massive tailwind for Intuit in our next chapter. Service-as-software built on data, AI, and human intelligence is delivering solid double-digit revenue growth for Intuit with expanding margins and massive customer impact. 
This plays to Intuit’s platform advantage—and why we’re built for this moment. Our AI and human intelligence platform innovation is fueling Intuit’s growth and delivering significant customer benefits. We enable businesses to operate from lead to cash, and help consumers from credit building to wealth building, all in a regulated environment. We aren’t just “using data,” we are grounding queries in 625,000 financial attributes per business and 24,000 bank connections on our platform. And as we scale, the business model strengthens: the more customers we engage, the more insights we gain, which improve recommendations, outcomes, and value for every customer. This creates a powerful network effect that reinforces our competitive advantage. You’re running 60 billion predictions a day on deeply sensitive financial data, yet even the best models can hallucinate or make errors. How do you reconcile that tension between near-perfect accuracy requirements and inherently imperfect systems? Who is ultimately accountable when an agentic AI-driven financial decision goes wrong? Our platform deploys multiple advanced technologies that draw on our large and relevant data sets designed to help ensure we’re delivering accurate answers to customers and mitigating the risk of hallucination or other types of inaccurate or inappropriate answers. When our AI provides an answer or gives guidance to a customer, it’s drawing on the deep expertise that Intuit has developed over many years, plus the data that gives us a 360-degree view of the customer. This helps make sure the answer given is relevant and grounded in the customer’s own data. The company has taken a firm stance on data sovereignty, keeping customer data within Intuit while still embedding capabilities into ecosystems like OpenAI and Anthropic. How do you balance that openness with control? 
And if models increasingly become the primary interface, is there a risk that the platform layer gets abstracted away despite those safeguards? Customers are establishing relationships with AI tools such as ChatGPT and Claude, and we want to show up at their point of need. Consumers and businesses using Intuit capabilities within these tools get personalized insights and recommendations powered by the platform to take certain actions. We want to be where our customers are and continue to own the customer relationship and data. Security and privacy are within our platform, and we selectively apply user data at the user’s request to power trusted, accurate responses in ChatGPT and Claude when a user is logged into their Intuit account. If AI agents take over execution, finance teams inevitably shift from doing the work to supervising it. In your view, what does the future finance organization actually look like? Are we heading toward a world of AI auditors and system overseers, or is there a risk that over-automation erodes financial intuition and literacy in ways we don’t yet fully understand? AI is already contributing to significant growth in the finance industry. This is especially apparent with data-driven digital brands—approximately 92% of companies that use AI in finance say they’ve either met or exceeded ROI expectations. AI is redefining how teams and organizations run and compete. As the role of AI in finance evolves, there’s a clear shift toward intelligence-driven finance operations. Long-term success, though, will depend on balance. Industry leaders must still find ways to leverage human talent if they want to thrive. At the same time, they’ll need to build internal systems that emphasize accountability and responsibility. View the full article
-
Why Timothée Chalamet is wrong about opera’s place in our AI-ravaged world
Timothée Chalamet drew widespread condemnation when he implied that opera is a dying art form, and said that “no one cares” about the medium anymore. It was a dumb thing to say. And it’s also wrong. Opera, like most performing arts, is still recovering from the pandemic. But the industry as a whole is actually growing–dramatically. Globally, opera is worth $3.4 billion, and is expected to grow to $5.33 billion over the next few years. First-time attendance has more than tripled since 2021, as more young people head to the opera house. And opera’s resurgence is part of a bigger trend; in multiple ways and across age groups and formats, people are turning away from the digital and towards the analog. In a world ravaged by AI, people increasingly want things they can touch, own, and experience. They want reality, with all its messiness and drama.
Bring in the Jester
I saw this firsthand when I attended a performance of Rigoletto at the San Francisco Opera last year. San Francisco is the most AI-obsessed place on earth. It’s the kind of city where one company put up entirely unironic billboards urging people to “stop hiring humans,” while another responded with competing ones suggesting that AI robots would sleep with your daughter. At the performance of Rigoletto, San Franciscans indeed rolled up to the 1930s-era War Memorial Opera House in AI-driven Waymos, and stood in the lobby snapping photos on their iPhones (almost certainly with AI filtering) for sharing to every social platform known to man. But once the audience filed into the main auditorium, all the tech immediately vanished. Anyone who so much as glanced at their phone risked being hissed at by angry neighbors (booing, hissing, and shouting “Bravo” are apparently all still things in the world of opera). One guy made the mistake of trying to take a photo as the show started, and another theatergoer waded through an entire row of audience members to personally yell at him.
This, in other words, was an entirely analog space–from the socially enforced norms of the attendees to the performance itself. As a total novice to the opera, I was shocked to learn that opera performers generally aren’t amplified. They fill a cavernous, multi-story auditorium using only the power of their voices. Save for the presence of some modern touches—like translated subtitles above the stage for people who don’t speak 19th-century Italian, and a guy in front of me dressed in head-to-toe leather (this is still San Francisco)—you could squint and think you’d been transported back to 1851, when Giuseppe Verdi wrote the show. And as a form, opera has plenty in common with the grabbiest content of today. If you think the AI slop videos churned out by Sora and Veo are dramatic, you’ve clearly never seen Rigoletto. There’s kidnapping, cuckolding, magical curses and (spoiler alert!) child murder. Opera even has memes, in the form of earworm musical phrases that have survived generations. I guarantee you’ve heard the signature aria of Rigoletto even if you’re as ignorant of opera as Chalamet. And once I remind you of it (Da-da-da-DUMPA-dum, Da-da-da-DUMPA-dum), you’ll have it in your head for a week (sorry). Technology may have changed. But when it comes right down to it, the things humans find engaging (surprise, scandal, catchy music and a good story) were pretty much the same 200 years ago as they are today.
An Analog World
Opera is growing because it delivers those timeless, very human things in a medium that doesn’t require sitting alone in a dark room, hunching over a tiny metal square while unseen computers in a building in Minnesota churn through gigawatts of electricity to keep a deluge of content continually flowing to your brain. And opera is hardly alone in its ascendance—as AI eats the world, anything analog is suddenly on a tear. Vinyl is now a billion-dollar industry, and even cassette tapes are seeing a resurgence.
In my own town, our local vintage vinyl store got so popular that they had to move from their quirky little storefront to a big-box space the size of a Best Buy. Fed up with AI-powered dating apps, young people are turning to matchmakers and in-person speed dating. And so-called “Grandma Hobbies” like knitting, crochet and cooking–really anything that doesn’t require a screen and a mainline to a data center somewhere out in the ether–are suddenly on the rise. Even app-free flip phones are back in vogue. Chalamet and other people carrying the banner of popular culture would be well advised to take note. The world of analog performance and connection isn’t fading in relevance–it’s surging. Yes, people may be outsourcing much of their work–and an unhealthy amount of their decision making–to chatbots and AI agents. But handing all these things off to a computer frees up space. And it turns out, what people want to put in that space isn’t more tech. It’s a set of darning needles, a gently-spinning LP–or a tenor in a leotard standing in the spotlight, belting out arias. View the full article
-
Google March 2026 core update rolling out now
Google released the March 2026 core update today, the company announced. This is the first core update from Google in 2026, and follows the quick March 2026 spam update from a couple of days ago. It also follows the February 2026 Discover update.
What Google is saying. Google updated its Search Status Dashboard to state: “Released the March 2026 core update. The rollout may take up to 2 weeks to complete.” Google added on LinkedIn: “This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites. The rollout may take up to 2 weeks to complete.”
About core updates. Core updates roll out several times each year. They introduce broad, significant changes to Google’s search algorithms and systems, which is why Google announces them. Google also releases some smaller, unannounced core updates. It has been a long time since the last core update. While many expected Google to roll out core updates more frequently, that didn’t happen.
What to do if you are hit. Google did not share any new guidance specific to the March 2026 core update. However, in the past, Google has offered advice on what to consider if a core update negatively impacts your site:
- There aren’t specific actions to take to recover.
- A negative rankings impact may not signal anything is wrong with your pages.
- Google offered a list of questions to consider if your site is hit by a core update.
- Google said you can see some recovery between core updates, but the biggest change would be after another core update.
In short: write helpful content for people and not to rank in search engines. “There’s nothing new or special that creators need to do for this update as long as they’ve been making satisfying content meant for people. For those that might not be ranking as well, we strongly encourage reading our creating helpful, reliable, people-first content help page,” Google said previously.
For more details on Google core updates, you can read Google’s documentation.
Previous core updates. Here’s a timeline and our coverage of recent core updates:
- The December 2025 core update began on Dec. 11 and ended on Dec. 29.
- The June 2025 core update began on June 30 and ended on July 17.
- The March 2025 core update began on March 13 and ended on March 27.
- The December 2024 core update began on Dec. 12 and ended on Dec. 18.
- The November 2024 core update began on Nov. 11 and ended on Dec. 5.
- The August 2024 core update began on Aug. 15 and ended on Sept. 3.
- The March 2024 core update began on March 5 and ended on April 19.
Why we care. With any core update, we often see significant volatility in Google search results and rankings. These updates may improve visibility for your site or your clients’ sites, but some may experience fluctuations or even declines in rankings and organic traffic. We hope this update rewards your efforts and drives strong traffic and conversions.
View the full article
-
How Delta turned TSA chaos into a brand advantage
As brand obsessions go, our collective love/hate relationship with airlines may be one of the most passionate and unique. It’s a perfect storm of time pressure, cost, emotional stakes, and a complete lack of control as a customer. An airline’s product is the experience, and that experience has a laundry list of potential pain points—check-in, lost luggage, boarding, seat comfort—that can ruin the entire thing. Now, the U.S. government is throwing a shutdown-size wrench into the mix. Due to a partial government shutdown, funding for the Transportation Security Administration has been paused. TSA workers have not been paid for more than a month, leading to staffing shortages at some airports. As both sides of the aisle point fingers and try to find a compromise, line-ups at airports are snaking so long that many airports have stopped even trying to post estimated wait times for travelers. On March 25, acting TSA head Ha Nguyen McNeill told Congress that air travelers are experiencing the highest wait times ever under the TSA. What do you call the opposite of a brand halo? A brand anchor? That’s what this entire situation is for airlines, since their customer experience is tied directly to this shutdown funding fallout. One airline, however, decided to step up to distinguish itself in a way we haven’t seen a brand do in a long time. On March 24, Delta announced that it was suspending its “specialty services” perk for U.S. Senators and Representatives, which gave those government employees high-touch service like escorting them past lengthy security lines and a dedicated check-in experience. “Due to the impact on resources from the longstanding government shutdown, Delta will temporarily suspend specialty services to members of Congress flying Delta,” the airline said in a statement to Fast Company. “Next to safety, Delta’s no. 
1 priority is taking care of our people and customers, which has become increasingly difficult in the current environment.” The move follows comments an “outraged” Delta CEO Ed Bastian made to CNBC last week. “It’s inexcusable that our security agents, our frontline agents, that are essential to what we do, are not being paid, and it’s ridiculous to see them being used as political chips,” Bastian said. It’s a great example of a brand picking a perfect moment to speak out. But if you’re hoping this signals a return to corporate leadership having a backbone when it comes to issues that impact their customers and employees, you’re going to be like folks at the airport: waiting a long time.
Purpose popularity
There was a time in the not-so-distant past when brands were lining up to say something about an issue—any issue, really. It was all black Instagram squares this, and Stop Hate for Profit that. In 2017, tech execs like Meta CEO Mark Zuckerberg and Google CEO Sundar Pichai spoke out against President Trump’s proposed “Muslim ban,” which limited travel and immigration from predominantly Muslim countries. Around that time, it was often seen as a brand risk not to speak out on certain issues. When Disney’s then-CEO Bob Chapek stayed silent about Florida’s Parental Rights in Education bill (better known as the Don’t Say Gay bill) in 2022, he was widely criticized by employees until he apologized and the company announced it was formally opposing the bill. But leading up to the 2024 election, maybe even starting with the Bud Light/Dylan Mulvaney controversy in April 2023, brands and corporate leaders have avoided taking a stand or speaking out on any issue even remotely considered political for fear of landing in MAGA crosshairs. This coincided with a broader shift in sentiment that saw the majority of these corporate stances as largely performative marketing moves, as opposed to real values.
The last couple of years, corporations have remained mostly silent, save for a few recent examples. (In January, a collection of more than 60 CEOs of Minnesota-based companies wrote an open letter that called for “an immediate de-escalation of tensions” during ICE’s occupation in the city.)
Pick a moment
According to recent Ipsos Consumer Tracker data, 56% of Americans say brands should remain neutral on political issues today, down from 63% last year. And 57% believe that, if a brand takes a stance, they should stick by their decision, regardless of consumer backlash. Of course, some issues are less political than others. Delta coming out against the shutdown to help spur a solution is about as neutral as possible. No one likes long lines. Taking away the special perks afforded to members of Congress is just this side of performative, and should be paired with giving something back to its everyday customers. Even if it’s just a free drink. The advantage here as a brand is to be agile and open enough to pick your moment to stand out. We’ve seen it work in the past. Back in early 2009, Hyundai did just that amid the 2008 financial crisis. The carmaker’s “Assurance” Super Bowl ad promoted its program that allowed buyers to return vehicles if they lost their income. It drove a 59% jump in brand consideration and helped boost Hyundai’s market share from 3.1% to 4.3% in early 2009. Despite a 22% industry-wide sales drop in September 2009, Hyundai’s sales rose 27%. Hyundai took advantage of a consensus issue—the financial crisis—that was out of its control, by finding a way to help customers that it could control. Right now, Delta has the first-mover advantage on this issue, if it chooses to take off. View the full article
-
How GM is shaping the future of car design, one Corvette at a time
I’m standing in a showroom at the new General Motors design headquarters outside of Detroit resisting the urge to reach out and touch something. In front of me, there’s a Corvette CX, a one-of-one experimental sports car that the automaker has meticulously handcrafted to look both silky smooth and fast as hell. As I crouch down to see just how low this low-riding car would drive, the roof of the Corvette CX lifts up in front of me and opens like the cockpit of a multimillion-dollar fighter jet. The robotic precision of the sculpted body opening up is pure spectacle atop the shock-and-awe of the car itself. GM designed this all-electric “hypercar” to be action-movie-ready. It’s capable of running on regular roads and high-speed racetracks, with 2,000 horsepower coming from individual motors for all four wheels. The skeleton chassis and interior structure are made of ultralight carbon fiber. Wind-turbine-like fans draw air through the open-channel bodywork. And just when a tight curve might jar the nerves of the whitest-knuckled of drivers, an adjustable rear spoiler optimizes aerodynamics in real time. The Corvette CX is an ostentatious tour-de-force of advanced engineering, design, and manufacturing that took a team of hundreds three years and undisclosed millions of GM’s nearly $70 billion market capitalization to create. So it’s a strange feeling, standing next to this singular vehicle, to be one of only a relatively small number of people who will ever actually see it up close. This is the curious condition of the modern concept car. Long past the prime of in-person auto shows where members of the car-buying public would gawk at futuristic prototypes, the concept car of today sits physically in near isolation, more an image for social media than a social experience. Concept cars are both more and less visible now, and their long-established brand-building purpose is in question. But as visions of the future, they are increasingly important crystal balls. 
During my recent visit to GM’s main design facilities, it was clear that concept cars like the CX are more than just sneak previews for thirsty car collectors. With growing competition from emerging automakers in China, the on-again-off-again embrace of electric vehicles in the U.S., and a long tail of industry-wide uncertainty connected to the Trump administration’s tariffs, the automotive industry is in one of its most dynamic periods in recent memory. Concept cars like the CX offer car designers a concrete aspiration for what they and the company want the future of cars to look like. “If you don’t create the beacon,” says Bryan Nesbitt, GM’s new senior vice president of global design, “you just spin and spin and spin.”
A vision of the future
These conditions explain why, depending on how you count, GM released three or four versions of a concept Corvette in 2025 alone. Under the watch of Michael Simcoe, the recently retired GM design chief, the company embarked on a multi-studio design effort to create new visions for the venerable Corvette sports car brand, which first launched in 1953. Simcoe called on three separate GM design studios around the globe to reinvent the Corvette for the age of waning internal combustion engines, increasing electric power, and not-so-distant autonomous driving. The first to be made public came from a recently opened studio outside Birmingham, England, which revealed an all-electric version of the famed muscle car with a sharp Batmobile nose, a smooth Shinkansen windscreen, and bulbous fenders. Another version was developed at GM’s Advanced Design studios in Pasadena, California, with a more snakelike appearance and street-racing vibe. The jet-age concept I saw up close at GM’s suburban Detroit campus, named the CX, was also adapted into a frighteningly powerful hybrid electric twin-turbo V8 race car.
Painted with a bright yellow racing livery and equipped with a specialized steering wheel ready for extreme, possibly unwise speeds, it’s co-branded with the video game Gran Turismo. These four concepts, while not wildly different from one another, suggest a range of possible new directions for one of GM’s most valuable brands, covering everything from the exterior contours to the materials in the chassis to the audible rumble a muscle car should make when it doesn’t even have an internal combustion engine. For GM, Corvette concepts have become rare and strategic milestones in a business that primarily revolves around the incremental improvements of the model-year marketing approach. Previous Corvette concepts came out in 2009, 2002, and 1992, and each went on to influence one of the eight generations of production Corvettes sold to the general public, as well as car design writ large. The 1992 concept included an early example of a rearview camera, now essentially a standard feature in new cars. The 2002 concept had a carbon fiber engine bay, testing lighter structural materials to boost performance. The 2009 concept’s design leaned flashy, with scissor doors and a cockpit-like interior, but did arguably more as a brand-building tool when the car was featured as one of the main characters in the 2009 movie Transformers: Revenge of the Fallen. Each concept is a one-off drivable piece of confidentially expensive R&D. The four Corvette concepts released in 2025 are no different. Standing next to the CX in the executive showroom at GM’s Design West building, Phil Zak, executive design director for the Chevrolet brand, assures me the car is wholly a conceptual project. GM did have a period in the late 1980s and early ’90s when the production vehicles that went to market looked almost indistinguishable from the cars the company had put out as concepts a few years prior. 
But Zak says the CX is by no means a preview of C9, the ninth-generation Corvette that is rumored to debut with its first model in 2029. Undoubtedly there’s a connection, though; the CX and the three other new Corvette concepts will influence “the formal development from an interior and exterior perspective,” Zak says. “It is the spiritual guide for where we’re going with C9.” The concept project is also a chance to test out those future models of Corvettes, likely several years’ worth, before green-lighting their production. Simcoe says investing in the concepts gives GM something tangible to put before potential buyers as a way to gauge their interest in what could soon be on display in a dealership showroom. “There is still a buzz that you get from being up close and personal with a really cool design,” Simcoe told me before his retirement last July. “Our object is to create that visceral reaction with customers—with people who are in the presence of these physical things, because that’s what we sell.” The future-looking design work happening here has ramifications not only for how many people will want to buy a given car, but also for how it gets manufactured, with what materials, through what means, and for what potential end-user experience. These advanced designs and concept cars help inform a diverse range of third-party suppliers and manufacturers involved in making the raw bones of a vehicle, the technology that powers it, and the surfaces and interfaces its drivers touch.
A space for creation
Sliding through a basement door inside the bowels of GM’s half-million-square-foot design complex, Nesbitt leads me into what may be the most pristine auto mechanic shop in the Motor City. Inside are four equally pristine vehicles that just happen to be some of the company’s most famous concept cars.
One, the single-seat Chevrolet Engineering Research Vehicle of 1959, or CERV I, looks like a rocket from an early sci-fi movie; its horsepower influenced a generation of race cars. Next to it is the 1988 Pontiac Banshee concept, a devil-red arrow of a sports car with an early head-up digital display and navigation system. Down the line is the shining silver 1959 Stingray Racer, a sleek, highly contoured open-air race car that would evolve into the second generation of Corvettes that started selling in 1962. Among these flashy and audacious cars is what might appear to a layperson as little more than a mid-to-high-end convertible from the 1940s, with chrome accents, bulging wheel fenders, and a tail that curves gently downward. It’s the kind of car that would be parked at Makeout Point in an old black-and-white movie. But despite an appearance that seems ordinary in retrospect, this car was revolutionary for its time. The car is the 1938 Buick Y-Job, the first concept car ever created by the auto industry. It was a project of GM’s first head of design, Harley Earl, a towering figure who is credited with bringing car styling and design to the automotive industry in the late 1920s. “Before that they were really just construction operations,” Nesbitt says. Earl used the Y-Job as a real-world testing ground for integrating new design approaches and emerging technologies into an everyday car. Among the Y-Job’s innovative ideas were hidden headlights, electric windows, and flush door handles, as well as its forward-leaning profile. It was an anomaly compared to the other boxy and bulky cars on the road in 1938, but by the mid-1940s it had set a new standard in aerodynamic forms and expressive detailing. Unlike the concept cars to come, the Y-Job was not really a marketing tool. Aside from being an internal prototype, it was also a company car. 
Earl used it for daily driving, commuting from his home in tony Grosse Pointe to GM’s headquarters in Detroit, drawing curious looks along the way. The odometer reads 25,890 miles, making it more akin to a used car than a showpiece. “Earl was gauging reaction, but he was doing it in a very organic way,” says Christo Datini, manager of GM’s archive and special collections. In Earl’s day, building out a concept vehicle was a scramble. The Y-Job was built on the chassis of a 1937 Buick, with bespoke parts and one-of-one components made by workaday tool shops and machinists temporarily diverted from the relentless demands of the assembly line. Today, GM and some of its biggest competitors have dedicated spaces where conceptual designs can be transformed from drawings to scale models and full-size vehicles all within one facility. Leaving the Y-Job behind, Nesbitt walks out of the mechanic’s shop and across a corridor to badge through another secured door. We enter a bright, buzzing workshop where more next-gen concept cars are being built by hand. The shiny metal innards of car parts sit on rolling carts amid a team of six mechanics crowded around the skeleton of a concept car up on jacks. They’re manually connecting hand-built door components onto the car, a four-seat Cadillac concept named Elevated Velocity. It’s been specially designed to switch between autonomous driving and human control, with a deployable steering wheel that pops out only when requested. Nearly every part of the car, from its thin contoured seats to its bold gullwing doors, was fabricated in-house. “The only thing we outsourced was the tires,” Nesbitt says. Part of this is for secrecy’s sake. Concept cars are obvious targets for corporate espionage, with competitors sometimes nefariously eager to know what’s being developed by the other team. 
But Nesbitt says doing all this work in-house is its own form of knowledge building, with designers, engineers, and fabrication specialists working together to understand how to turn an idea into reality, and whether new processes or tools need to be created to make a door system or roof canopy feasible to manufacture. “It’s the flexibility of having this all in one place that gives us operational efficiency,” Nesbitt says. “When you outsource, you don’t get any of that value.” The idea makes intuitive sense, and it’s one that other automakers are onto. Ford’s new headquarters building, in another Detroit suburb, also brings design and fabrication under one roof, with designers able to bring a full-size clay model from a fabrication shop up an elevator and into their design studio to check lines or rethink proportions. GM’s design facility, including the new 360,000-square-foot Design West complex that opened in 2024, is configured for open collaboration across departments: Designers and engineers sit within close view of production cars getting their final touches and more experimental designs still coalescing. “You’ve got a whole bunch of people who are inside of the future,” says Simcoe. “They’re working with the future vision in their minds or in their sight as they’re doing production vehicles and coming up with production solutions.” This is where the designers behind the Corvette CX concept developed the car’s look, feel, and material choices. Working alongside other specialists from across the company, they found they could reliably use carbon fiber for elements of the car’s structure and suspension. It’s also where they carved out several scaled-down potential designs in clay models, gleaning lessons and gathering feedback along the way. 
It’s a process that helped quickly narrow down what could be built with existing means, what couldn’t, and how GM could start to influence its suppliers to create the kinds of parts and components it expects to need years down the line. Blurring high-end design, advanced engineering, and futuristic manufacturing, the conceptual work happening here has ramifications beyond GM to the larger auto industry. Old guard vs. new guard On the floor of the Detroit Auto Show in January, the Corvette CX concept sat parked on a small, spare stage, cordoned off from anyone trying to get a touch, its signature lift-up cockpit sadly closed. Amid the audible and visual noise of the show—dozens of automakers displaying further dozens of cars, four screeching test tracks, and a hodgepodge of the obscure component suppliers who build many of the bones and brains of modern cars—the CX could be easy to miss. In the not too distant past, a concept like this would have been given star treatment, with a sparkling reveal on a grand stage, making it a central draw for visitors. Now the concept car in general is a sideshow, offering some visitors a quick photo opportunity and others a momentary double-take before they move on to see the real cars they might actually want to buy. That seems to be the true purpose of the car show today—giving consumers the equivalent of a suburban auto mall concentrated into the space of a convention center. “Concept cars were once part of a big collective celebration that was predictable and toured around the world. That seems to be on the decline,” says Raphael Zammit, an associate professor of automotive design at the College for Creative Studies in Detroit, who previously worked on concept and production cars for Porsche, Hyundai, GM, and others. “They’re just too expensive these days. 
I think the return on investment becomes a question mark.” The question of relevance was on Simcoe’s mind when he was developing the idea for the multi-concept Corvette project. After 42 years with GM, he’d seen the concept car evolve, and watched in recent years as conventional avenues for releasing new concepts and ideas began to disintegrate. “The buzz was directly from traditional media, and people carrying the message through word of mouth. Now it’s instantaneous,” he says. Social media and digital coverage have come to the forefront, making the actual tangible concept car almost superfluous. “Unfortunately, it doesn’t allow as many people to have that physical interaction with the concept. But probably, in the end, more people see it.” GM has experimented with gathering feedback on concepts before a physical model has even been considered. Recently, the Buick brand developed a purely digital concept called the Electra Orbit, publishing the images primarily in China, the market GM was targeting. “It got a lot of international attention,” Nesbitt says. And that’s increasingly important, as competition in the auto industry is much more diverse than in the heyday of Detroit’s big automakers. Chinese carmakers are surging, with brands like BYD and Geely using state support to expand rapidly and carve their way into foreign markets. Big investments in technology and manufacturing, especially on the EV front, are helping some of these companies leapfrog more established auto manufacturers. That puts the onus on the old guard to stay fresh. Part of GM’s approach has been to build on the strength of its history, playing up fan-favorite brands like Cadillac and Buick, and tentpole models like the Corvette. As a result, the concept car is taking on a new level of importance for the company. “It’s evolving more than it ever has,” Nesbitt says. 
The case for physical models While the world simultaneously embraces the visual prowess of artificial intelligence and wades through a downpour of AI slop, the view from the clay-spattered floor of GM’s design facility is that the case for physically building a concept car is still strong. The Corvette concepts of 2025 may prove to be a flag in the ground for pushing new ideas and getting physical. For Nesbitt, who started out in the automotive industry in the early 1990s hand drawing designs with ballpoint pens on paper snagged from the office Xerox machine, the physical concept car remains a powerful creative tool. Walking down one of the impressively long hallways in GM’s new design building, he stops at a car that’s on display. It’s a four-seat Cadillac concept from 2024, finished in an unexpected pastel yellow. The interior is accented with wood—another unexpected choice in an industry deeply in love with chrome (Nesbitt says the unusual interior reshaped the car as it was being built). In a late version of the concept’s design, the designers wanted to offer the rest of their collaborators and the company executives a better view of the unique interior and its potential to reconfigure future cars, so they cut the roof off. Beyond the interiors, everyone was taken by the now-topless exterior form. “So it’s a convertible now,” Nesbitt says. These kinds of design changes aren’t impossible to make in the purely digital space, but Nesbitt says there’s undeniable value in being able to take a visual idea and make it physical. Granted, most of the conceptual design work done by GM going forward will never leave the confines of a computer screen. But that’s just a testament to the growing powers of digital design technologies that allow much of the refinement to happen on-screen, from aerodynamics to regulatory compliance. “The tempo of technology integration is increasing. AI compute power is rising,” Nesbitt says. 
But advances in technology won’t be enough to guide a company like GM through the current era of drastic change facing the automotive industry. Concept cars—testing grounds from the beginning—will continue to offer tangible guides for how the industry evolves, from design and engineering to manufacturing and marketing. Nesbitt is just the eighth person to hold the reins of design for GM in its 118-year history, and he doesn’t want to be the last. Concept design, with its changing tools and roles, will continue to be part of determining the future of GM’s products, he says. They’re visions for cars, yes, but he argues they’re also visions for where the entire company, and maybe the entire industry, will go from here. He concludes, simply, “You’ve got to identify the what before you identify the how.” View the full article
-
Why we need to rethink scale
In 1966, Bruce Henderson, the founder of the Boston Consulting Group, articulated what would become one of the most influential ideas in the history of business strategy: the experience curve. Its origins date back to T. P. Wright’s original 1936 paper, “Factors Affecting the Cost of Airplanes.” Wright discovered a relationship between the cumulative production of a physical good and the costs associated with producing it. The breakthrough was that you could predict your future cost structure in a way competitors couldn’t. In 1966, BCG did a major study for a semiconductor firm and made a similar discovery. As Martin Reeves describes it, they found “that a company’s unit production costs would fall by typically 20 to 30 percent in real terms for each doubling of ‘experience,’ or accumulated production volume. This had a profound effect on how companies thought about building cost advantage and pricing their output, specifically moving aggressively to grow in the early years of an industry’s formation, so that they could establish a cost advantage which later entrants would be unable to match, in turn yielding a sustainable competitive advantage.”

The Growth/Share Matrix

This led to one of the most famous strategy frameworks ever created, the BCG growth/share matrix. It followed experience curve logic. If you had a large market share in a high-growth industry, you had the potential to win, and win big! If your relative market share was high (compared to other firms), and the growth rate of the market you were participating in was also high, those businesses were “stars.” They would justify investment. If you had a high share, but the market was growing slowly, those were deemed “cows.” As the nickname suggests, cows were there to be milked, but not to command investment. If the market growth rate was low and your share was low, these businesses were “dogs,” and were candidates for divestment.
And to this day, nobody ever figured out what to do with the question marks (or problem children), businesses with low shares but high growth rates. The idea took off like wildfire, for a management framework, anyway. By the 1980s, according to surveys and observations, roughly half of all Fortune 500 companies were using it to allocate resources across their portfolios. Henderson’s insight shaped much of management thinking of the time: the rise of the conglomerate, the dominance of the Fortune 500, the global disposal of entire sectors (to the point at which American companies could no longer compete in consumer electronics), and a whole lot of merger & acquisition activity, as a market for market share emerged in its own right. While later empirical research challenged the validity of the matrix (my personal favorite is a piece called “The Product Portfolio and Man’s Best Friend”), it had an outsize impact on executive thinking for decades, particularly emphasizing the importance of scale.

In a dematerializing economy, conventional views of scale aren’t relevant

As more and more of the value of a modern large firm consists of intangible assets, the taken-for-granted value of scale is questionable. A revealing metric is revenue per employee. In the industrial economy, a high-performing company might generate $200,000 to $500,000 per employee annually. That number was treated as a kind of ceiling—you could push it higher with efficiency programs and automation, but the basic ratio of output to headcount was relatively stable. Digital businesses are rewriting the rules. Cursor, the AI-assisted software development tool, reached $500 million in annual recurring revenue with fewer than 50 employees—roughly $10 million per person. Midjourney, which has never taken a dollar of external funding, crossed $200 million in revenue with approximately 40 people. A startup called Base44 sold for $80 million six months after its founding, built entirely by one person.
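The doubling relationship Wright and BCG observed can be sketched in a few lines of Python. This is a minimal illustration, not the firms' actual model; the 25% learning rate below is a hypothetical value inside the 20-to-30-percent range the article cites.

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.25):
    """Wright's law: each doubling of cumulative output cuts unit cost
    by `learning_rate`. The curve's exponent is log2(1 - learning_rate)."""
    b = math.log2(1 - learning_rate)
    return first_unit_cost * cumulative_units ** b

# With a $100 first unit and a 25% learning rate, each doubling of
# cumulative volume multiplies unit cost by 0.75:
print(round(unit_cost(100, 2), 2))  # 75.0
print(round(unit_cost(100, 4), 2))  # 56.25
print(round(unit_cost(100, 8), 2))  # 42.19
```

This is the arithmetic behind the land-grab logic: a firm twice as far down the curve as a later entrant carries a structurally lower cost base.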
The experience curve assumed that accumulated volume drove down costs. But if your primary input is intelligence rather than labor, and intelligence is available on demand at near-zero marginal cost, the curve collapses. A small team with the right AI can match the output of an organization ten times its size, at a fraction of the overhead.

What Scale Actually Bought

To understand why this matters strategically, it helps to be precise about what scale was actually purchasing in the old model. Scale bought four things: production efficiency, through the experience curve; market power, through the ability to invest in distribution, marketing, and sales infrastructure that smaller competitors couldn’t afford; talent aggregation, through the ability to attract specialized people by offering stability, resources, and career development; and organizational resilience, through redundancy and the ability to absorb shocks that would destroy a smaller organization. AI is substituting for all four, to varying degrees. Production efficiency no longer requires accumulated volume when a single developer with AI tools can write code, design interfaces, and manage infrastructure that would have required twenty people five years ago. Market power through distribution is being disrupted by AI-driven content, community-based growth, and platforms that allow small producers to reach global audiences without large sales organizations. Talent aggregation is becoming less critical when AI agents can perform an expanding range of specialized tasks. And organizational resilience, while still a genuine advantage of large organizations, matters less when smaller organizations can operate with lower fixed costs and therefore survive disruptions that would force layoffs and restructuring at larger rivals.

The new barriers to entry?

The evidence suggests three durable new “moats” that take over from scale. The first is proprietary data.
A company that has accumulated unique, hard-to-replicate datasets has an advantage that AI amplifies. The more powerful the AI tools, the more valuable the proprietary data becomes. This is why the most defensible AI businesses tend to be deeply vertical: they have access to data that generalist competitors cannot replicate. Companies as varied as Netflix, Spotify, and John Deere keep their strong positions because only they have access to crucial information about their customers. The second is trust and relationships. In a world where AI can generate convincing content, proposals, and analysis at scale, the scarce resource becomes authentic human connection and trust. Customers of professional service firms, healthcare providers, and any business where the buyer is taking a significant personal or financial risk will continue to value relationships that go beyond what an AI-mediated interaction can provide. Edward Jones, Delta Airlines, and Zurich Insurance have leaned into human interaction as a differentiator. The third is what we might call ecosystem position—the ability to sit at the center of a network of complementary actors whose collective value exceeds what any single participant could create alone. Platform businesses, community-driven products, and companies that serve as connective tissue in an industry retain meaningful advantages that scale amplifies rather than creates. Apple, for instance, has such a powerful ecosystem position that Google pays it around $20 billion a year to be the default search engine on the iPhone.

The New Disruption

For strategists, the implication is uncomfortable but clear. In a world of what Scott Anthony calls “Epic Disruptions,” there are two mechanisms that lead to scale-based advantages simply evaporating. The first is when something that used to be complex or incredibly difficult becomes easy. The second is when something that used to be expensive or inaccessible becomes affordable.
Combine these two factors, and you have the modern manifestation of Clayton Christensen’s innovator’s dilemma, but today in increasingly digital form. The assumption that scale will save you is dangerous. View the full article
-
Google Begins Rolling Out March 2026 Core Update via @sejournal, @MattGSouthern
Google has started rolling out the March 2026 core update. The rollout may take up to two weeks to complete. The post Google Begins Rolling Out March 2026 Core Update appeared first on Search Engine Journal. View the full article
-
Social Security change capping benefits payments at $50,000 a year: Experts’ solution to the SSA going broke in 7 years
With Social Security on track to go broke in less than seven years, a new report from the Committee for a Responsible Federal Budget (CRFB) is proposing a solution: Cap Social Security payouts at $100,000 a year for couples, as part of an overall plan to save it from insolvency. (That’s $50,000 for a single retiree.) The renewed spotlight on Social Security follows a recent report from the Congressional Budget Office (CBO) that the main trust fund responsible for paying benefits, the Old-Age and Survivors Insurance Trust Fund, could be insolvent as early as 2033. By law, that would automatically trigger a massive 24% cut in benefits. On top of the higher cost of living, including higher grocery and gas prices, this would mean a big financial hit for seniors. One reason the CBO is forecasting Social Security could go broke sooner than expected is that the Social Security Administration has had to increase COLA (cost-of-living adjustment) payments to keep up with inflation. SSA made a 2.8% COLA increase for 2026, and is projecting, on the high end, a 3.1% adjustment for 2027.

The “Six-Figure Limit”

The CRFB’s proposed “six-figure limit” (SFL) would take effect this year and establish a new maximum benefit for a couple retiring at the normal retirement age (NRA), adjusted based on marital status and collection age. (Retirees can start collecting benefits between ages 62 and 70, though the full retirement age is 67.) Currently, only the highest-income couples can collect $100,000 a year in Social Security benefits, which represents a small fraction of retirees.

How would the Six-Figure Limit improve Social Security solvency?

In short, the SFL would create small savings that could grow over time. Looking at a few different models, some changes could save $100 billion over 10 years, while others could close “20% of Social Security’s solvency gap and three-fifths of the 75th-year deficit . . . indexed to inflation,” according to the CRFB. View the full article
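A minimal sketch of how such a cap could work, assuming nothing beyond the $50,000 single / $100,000 couple figures in the article. The CRFB's actual proposal adjusts for marital status and collection age in more detail; this simplified version just applies a hard ceiling.

```python
def capped_benefit(annual_benefit, married=False):
    """Apply an illustrative six-figure limit: $100,000/year for a
    couple at normal retirement age, $50,000 for a single retiree.
    (A simplification; the real proposal's age and marital
    adjustments are more involved.)"""
    cap = 100_000 if married else 50_000
    return min(annual_benefit, cap)

print(capped_benefit(58_000))                # 50000: single, trimmed to cap
print(capped_benefit(58_000, married=True))  # 58000: couple, below cap
print(capped_benefit(120_000, married=True)) # 100000: couple, trimmed to cap
```

Because only the highest benefits are touched, most retirees' checks would be unchanged, which is why the projected savings start small and compound over time.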
-
The Navy’s AI bet to fix its submarine bottleneck
The answer to America’s submarine bottleneck, the U.S. Navy has decided, lies as much in software as it does in steel. A new multibillion-dollar facility in Cherokee, Alabama, aims to harness AI and robotics to build submarine components faster and more reliably. The automated “factory of the future” will produce parts for the Navy’s Virginia-class attack submarines and Columbia-class ballistic missile submarines, both central to the U.S. fleet. It will cost $2.4 billion to develop. “This factory is the first of three facilities designed to address the most critical bottlenecks in the maritime industrial base,” said John C. Phelan, secretary of the Navy, in a statement. The bottleneck is significant: a shortage of labor. The project is a major public-private push to revive U.S. submarine manufacturing capacity through heavy automation, says Chris Power, founder and CEO of Hadrian, the company behind the Alabama facility. “In the U.S., just the submarine program alone is 70 million man hours in deficit,” Power says, noting that the gap traces back in part to the offshoring of manufacturing jobs in the 1980s and 1990s. “There aren’t that many skilled workers to hire.” The company’s answer is to layer in automation and AI. “We have to give the American workforce superpowers of AI and robotics to allow them to be more productive,” Power explains. He says the site will begin producing components and large subassemblies later this year, then ramp up over the following 18 to 24 months. The goal is to automate “80% to 90% of the key efforts that are really complicated.” That productivity push comes as the U.S. faces mounting geopolitical pressure, with demand for military hardware unlikely to ease. Automation may help close the production gap. It does less to solve what happens after deployment, when equipment breaks and needs to be fixed. That talent shortage extends beyond wartime scenarios. 
Cynthia Cook, senior fellow in the defense and security department at the Center for Strategic and International Studies, says the Alabama plant is part of a broader effort to rebuild a hollowed-out shipbuilding base. The U.S., she argues, can no longer rely solely on coastal shipyards. Inland factories can produce modules and components in regions where labor is more available. There will “still be a huge need for repair and sustainment,” Power says, adding that this work will remain “manual for a while, because it’s so nuanced.” Factory automation, in his view, should handle repeatable, high-volume production, while skilled workers focus on complex, higher-value repair. This tension is not theoretical. The USS Gerald R. Ford is currently undergoing repairs in Crete after a fire in its laundry area caused significant damage, with fixes expected to take a year or more. Cook agrees that maintaining repair expertise will remain critical. “When a ship comes in for maintenance, you really need a lot of folks who have tacit knowledge about ships, what can go wrong and how to fix things,” she says. “A repair skill is different from a new-build skill.” She believes the system will retain enough capacity to preserve that knowledge. Others are less certain. Automation can deepen labor challenges if it demands skills the current workforce lacks, especially without a coherent strategy for recruiting, retaining, and training workers, says Christophe Combemale, assistant research professor of engineering and public policy at Carnegie Mellon University. He is also concerned about how little visibility Washington has into the capacity and quality of U.S. manufacturing training pipelines. At the same time, Combemale does see some upside. “Some aspects of automation actually improve the longevity of this knowledge by making tacit knowledge explicit,” he says—warning, however, that without careful planning expertise could erode. 
“Are you making the labor shortage worse by demanding new skills that your incumbent workforce doesn’t have?” It’s a question that may only be answered when the system is tested—at which point it will become clear whether AI and automation have strengthened the industrial base or quietly reshaped its limits. View the full article
-
No Kings protest: March 28 rally goes far beyond America—it will be on 6 continents. Here’s who will be there
There are more than 3,100 events scheduled in all 50 states for tomorrow’s third “No Kings” nationwide protest. Musicians Bruce Springsteen and Joan Baez, actress Jane Fonda, and Senator Bernie Sanders of Vermont are among those slated to speak and/or perform at one of the events on March 28, in Minnesota’s capital city, St. Paul. Protests will also take place around the globe on every continent except Antarctica, organizers tell Fast Company. Springsteen will be singing his new political hit, “Streets of Minneapolis,” about President Donald Trump’s deployment of Immigration and Customs Enforcement agents to that city. He and the E Street Band will kick off their next tour, dubbed “No Kings,” in the city on Tuesday, March 31. Minneapolis has become the epicenter for protests against the Trump administration’s immigration crackdown, which has led to chaos and violence there, including the shooting deaths of residents Renee Good and Alex Pretti by federal officers in January. Organizers say the third No Kings protest is on track to be one of the largest single-day nonviolent nationwide protests in American history, with millions of people saying, “No Illegal Wars, No ICE, and No Kings” to Trump and his administration. A previous No Kings protest in June 2025 drew 5 million people to more than 2,100 events, and another in October drew 7 million to more than 2,700 events. Tomorrow’s mobilization is the next step in the growing grassroots coalition movement of teachers, unions, students, immigrants’ rights groups, and others, which is gaining traction in red and purple states.
No Kings spreads to red states and districts

“Our suburban events are up 40% from the first protests, and we are seeing double-digit growth in Idaho, Wyoming, Montana, and Republican Congressional districts including Senate Majority Leader John Thune’s district in South Dakota, and House Speaker Mike Johnson’s district in Louisiana,” Leah Greenberg, cofounder of Indivisible, one of the key organizers of the protest, tells Fast Company. Both districts are conservative strongholds in red states. Indivisible is also seeing more No Kings events planned in traditionally red areas, like Atlanta’s East Cobb, and Scottsdale and Chandler in Arizona, Greenberg adds. What makes this protest different from previous ones, according to organizers, is that now Americans are experiencing armed and masked ICE agents at airports; a war in Iran; and attempts by the Trump administration, along with Republicans in Congress, to pass the Safeguard American Voter Eligibility Act, known as the SAVE Act, which could make it harder for people to vote.

Boston, Los Angeles, Seattle, Albuquerque, and Washington, D.C., events

Here are some of the speakers and performers expected to attend No Kings protests in a number of U.S. cities.

Boston:
Dropkick Murphys (performing)
Massachusetts Governor Maura Healey
Senator Ed Markey of Massachusetts
Representative Ayanna Pressley of Massachusetts

Los Angeles:
Actor Jodie Sweetin
R&B singer-songwriter Iman Jordan
Kelley Robinson, president, Human Rights Campaign

Seattle:
Washington Attorney General Nicholas W. Brown

Albuquerque:
Stacey Abrams, former minority leader of the Georgia House of Representatives; head of the 10 Steps Campaign
Representative Melanie Stansbury of New Mexico
Alex Uballez, former U.S. Attorney for the District of New Mexico
New Mexico Attorney General Raúl Torrez

Washington, D.C.:
Alexis McGill Johnson, president and CEO of Planned Parenthood
Lee Saunders, president of the American Federation of State, County and Municipal Employees

View the full article
-
Why Meta is building its high-tech South Carolina data center with an old-school material
In a greenfield industrial park in rural Aiken County, South Carolina, Meta is building a new $800 million data center that’s much like any of the other hyperscale data centers giant tech companies are scrambling to construct. Set on 300 acres with two massive data halls making up most of its 715,000 square feet of buildings, it’s the kind of gargantuan facility that has become the de facto built form of the race to harness the lucrative power of artificial intelligence. But past the sprawling data hall buildings, a comparatively modest administration building has a unique design feature. Instead of the concrete and steel used in the data halls and countless other data centers around the world, the facility’s administration building is being made primarily of wood. A grid of honey-toned glulam mass timber beams and columns rises out of the dirt on site, and more wood tops the structure that’s currently under construction. When the data center becomes operational in spring 2027, this wood-framed building will house the offices of the humans who will keep it running. And though the majority of the facility will be built using the conventional concrete and steel approach most designers and contractors are used to, this wood-framed building offers a glimpse of a slightly more sustainable future for data centers. Mass timber is a material choice that has some clear upsides, especially when it comes to the negative optics of electricity-hungry, water-thirsty data centers. “Sustainably-sourced mass timber is a great fit for us because it has much lower embodied carbon than traditional materials like steel or concrete,” says Blair Swedeen, Meta’s global head of net zero and sustainability. (Meta has a goal of net zero emissions by 2030.) “Using mass timber helps us build in a way that’s better for the environment.” It also helps build in a way that can be much faster than building with conventional concrete and steel.
Swedeen says using mass timber, which is typically prefabricated to the specifications of a project, can speed up construction timelines, saving several weeks. And with less overall weight than a conventional structure, a mass timber building needs only about half as much concrete for its foundation. “The use of mass timber brought several positive changes to the project,” Swedeen says. The mass timber elements for Meta’s data center project were provided by Smartlam North America, a leading mass timber manufacturer in the still nascent U.S. market. Nick Waryasz, a senior mass timber specialist at the company, says mass timber has been mostly used in residential construction, but there’s been growing demand for more industrial uses. “The biggest draw for using timber in those instances has been the sustainability metrics of building with wood when it’s replacing steel and concrete, and having a team that has an interest in doing that, like some of these bigger tech companies,” he says. Amazon, for example, recently opened a mass timber delivery station in Elkhart, Indiana, which the company hopes to use as a proving ground for using wood in future industrial projects. A data center being built by Microsoft is also using mass timber for part of its structure. Other data centers, currently in a building boom, are likely to follow. And not just for environmental reasons. “I’ve had some early discussions on big industrial projects like data centers recently, primarily driven by the fact of how long lead times are for steel construction,” Waryasz says. “It’ll be over a year out to get any kind of steel structures on projects, when our lead times for similar projects might be six months.” For the highly competitive AI industry, speed to market for data centers is increasingly important. That’s why Meta founder Mark Zuckerberg announced back in July that one way the company was accelerating data center rollouts was by using easily built large-scale fabric tents.
Mass timber could be a slightly slower but more permanent alternative. Mass timber could also help soften the harsh image of some of these hyperscale facilities. “It brings warmth to things that sometimes are inherently cold,” says Caroline Dauzat, fourth-generation owner of Rex Lumber, which provided the raw timber that Smartlam manufactured into structural elements for Meta’s project. She says mass timber represents only a tiny percentage of what her company’s wood is used for, but industrial projects could lead to growth. “It’s a marketplace to create more demand for lumber.” Smartlam’s Waryasz says the mass timber industry is maturing to the point where industrial projects like data centers may opt for mass timber products automatically. “If they continue at their trajectory or anything close to it, it might even just become a supply question, with timber for construction being relatively abundantly available,” he says. Meta’s use of mass timber on the data center project in South Carolina is just a small portion of the facility’s massive footprint, but future projects may embrace the material in a bigger way. “We’re continuing to actively explore mass timber not only in our administrative buildings but also in warehouses and critical data halls, the spaces that house servers,” says Meta’s Swedeen. “Mass timber’s strength, durability, and fire resistance makes it a promising candidate for broader applications within data center infrastructure and we continue to evaluate these opportunities.” View the full article
-
Scientists have designed a way to save our brains from fake AI videos
Visual truth is going down in flames, thanks to new generative AI models that produce synthetic media that looks indistinguishable from reality. But a team of university researchers has figured out a hardware fix that just might save us. Engineers at ETH Zurich have designed a working prototype of a camera that physically stamps a cryptographic seal of authenticity onto every photo or video right at the image sensor (electronic chip) that captures each photon from the actual world. “Trust in digital content is eroding. We wanted to create a technology that gives people a way to verify whether something is genuine,” co-developer Felix Franke explained in a press release. This new hardware architecture fundamentally changes how we authenticate media. Right now, the tech industry relies on a standard called C2PA—Coalition for Content Provenance and Authenticity—which is already available on some devices, such as high-end cameras from Leica, Nikon, Fuji, and Sony’s Alpha line. It also recently hit the mobile market natively with the Google Pixel 10. This standard relies on the device’s main processor to stamp videos and pictures with a cryptographic seal that verifies their authenticity. When you see the picture or video in a C2PA-enabled player or on TV, the software can tell you it’s real. For example, if Meta enabled Instagram to read these C2PA labels, then a video in your feed could show that it’s trustworthy, just like your browser shows a little lock icon to indicate that there is a verified, secure connection with your bank. Here’s how the current solution works: The camera lens captures a scene, translates the light into digital information, and shoots it down an internal wire to reach the main computer chip. It is only after the data finishes that commute that the processor slaps a cryptographic signature on the file. But that tiny trip down the wire is a security liability. 
A sophisticated bad actor can intercept that internal cord, hijack the raw feed, and inject a completely synthesized video stream, producing a video that can be circulated as real. The phone’s main processor has no idea it is being lied to, so it blindly signs the fake footage, officially certifying any algorithmic hallucination as a verified fact. Would it be hard to do? Yes. But it is possible. ETH Zurich’s solution moves the security checkpoint directly to where the light enters, disabling the possibility of faking authenticity (unless you get Stanley Kubrick to direct your moon landing in a soundstage). With ETH Zurich’s chip, the researchers baked cryptographic circuits right next to the pixels that catch the light. The moment a photo is taken, the device instantly calculates a unique mathematical fingerprint of the captured reality. If you alter even a single pixel of the picture after this stamping happens, that fingerprint completely breaks. “If data is signed the moment it is captured, any later manipulation leaves traces,” research associate Fernando Cardes notes in the paper published in Nature Electronics. Once that fingerprint is calculated, a second circuit locks it using a private key—a secret cryptographic password permanently burned into the silicon. Because this private key is physically trapped inside the sensor’s architecture, it can never be extracted, copied, or intercepted by a hacker. The file is born secure before it ever moves a millimeter from where the light originally landed. To let the world know the footage is real, camera manufacturers would publish the sensor’s corresponding “public key” on an immutable public ledger, like a blockchain. Anyone can use that public record to mathematically verify that the video came from that exact physical chip and hasn’t been tampered with, enabling any device or player that understands the standard to surface that verification to any consumer. 
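The sign-at-capture flow described above can be sketched in a few lines. This is a hypothetical illustration, not ETH Zurich's actual design: the real sensor uses an asymmetric key pair (a private key burned into the silicon, a public key published for verification), while this stdlib-only sketch substitutes an HMAC secret as a stand-in key purely to show the fingerprint-and-seal logic and the single-pixel tamper check.

```python
import hashlib
import hmac

# Hypothetical stand-in for the private key fused into the sensor's silicon.
# A real implementation would use an asymmetric signature scheme so that
# only the public key ever leaves the chip.
SENSOR_SECRET = b"key-burned-into-the-silicon"

def seal_frame(pixels: bytes) -> bytes:
    """Fingerprint the raw pixel data and seal it, as if inside the sensor."""
    fingerprint = hashlib.sha256(pixels).digest()
    return hmac.new(SENSOR_SECRET, fingerprint, hashlib.sha256).digest()

def verify_frame(pixels: bytes, seal: bytes) -> bool:
    """Recompute the fingerprint and check it against the seal."""
    expected = hmac.new(SENSOR_SECRET,
                        hashlib.sha256(pixels).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, seal)

frame = bytes([10, 20, 30, 40] * 100)   # stand-in for a raw pixel buffer
seal = seal_frame(frame)                # stamped at the moment of capture

tampered = bytearray(frame)
tampered[0] ^= 1                        # alter a single pixel value

print(verify_frame(frame, seal))            # the authentic frame passes
print(verify_frame(bytes(tampered), seal))  # one changed pixel breaks the seal
```

The tamper check at the end mirrors the article's claim: changing even one pixel after sealing invalidates the fingerprint, so the seal no longer verifies.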
To forge a video, an attacker couldn’t just write clever malware; they would have to physically rip open the hardware and manipulate the microscopic circuitry of the sensor itself. Cardes notes that this requires such a massive technological effort that “the mass generation of manipulated content for social media platforms would be practically impossible.” The main roadblocks for the new ETH Zurich chip, compared with current C2PA solutions, come down to manufacturing scale and money. Unlike current C2PA implementations—which are deployable via software and firmware updates—the Swiss solution demands an entirely new hardware pipeline. The industry would have to redesign, retool, and manufacture new camera sensors with these crypto-circuits. The financial barrier for manufacturers to adopt it is the primary hurdle. “We are currently exploring how to reduce costs for camera and sensor manufacturers, should they wish to incorporate the new technology into their chips,” Cardes notes. Fighting insanity My thought about this: So what if it costs some pennies? This seems like the kind of hardware update that should be mandatory worldwide. Fabricated content is a danger to society—from the schoolyard to worldwide conflicts. Escaping from our current schizophrenic house of mirrors will require drastic action. And yes, some news organizations—like France Télévisions and some CBC/Radio-Canada and BBC News content—are publishing C2PA visual content, but news organizations are not the problem here. Journalists already have strong fact-checking standards, and it’s extremely rare to see a reliable outlet publish false information. For them, this is a shield against those who will now use the generative AI card to claim that any visual evidence they don’t like is a conspiracy. The real problem is in the trillions of videos and pics that circulate through social media, allegedly captured in war zones, streets, college dorms, offices, and homes. 
And since we know C2PA can be hacked by bad actors, we probably need to go straight to a drastic, incontestable solution like ETH’s cryptographic sensors. I vote to put them into every single camera we own. Dismissing everything as fake and accepting only what we can positively identify as 100% real is our only path back to certainty. And sanity. View the full article
-
Are you falling into the comfort trap?
In 2012, Google conducted research to identify the factors that determine effective teams. This research, now famously known as Project Aristotle, analyzed hundreds of teams and individual members to crack the code on what enables some to operate at high levels while others flounder. What their study revealed is something Harvard Business School professor Amy Edmondson had discovered almost two decades prior: the most important factor for high-performing teams is psychological safety. That is to say, teams perform better when their members feel safe taking risks and being vulnerable with each other, without fear of punishment. Google’s watershed study brought renewed attention to Edmondson’s groundbreaking research and thrust psychological safety into the zeitgeist—and onto the tips of tongues of scholars, executive coaches, and business leaders alike across a wide array of categories. However, despite the adoption of this critical contribution to business practice, far too often, safety is mistaken for comfort—and the two couldn’t be more different. Safety is a matter of protection from harm, as in “I feel safe to jump off this rock” because the likelihood of harm is mitigated. Comfort, on the other hand, is a state of ease, where I feel comfortable jumping off the rock because it’s easy. You see the difference? One embraces risk because the consequences are low, while the other sees no risk at all. One leads to breakthroughs and the other leads to routine. Comfort, as the radio broadcaster Stan Dale once declared, is a “plush-lined coffin” that prevents individuals from stretching themselves, which subsequently mitigates the possibilities of their collective collaboration. With all the best intentions, I’m certain, many leaders attempt to foster a psychologically safe environment by ensuring their employees feel comfortable in the office. 
As such, they prioritize niceness and harmony over candor and conflict, unknowingly eroding the necessary conditions that help us do hard things and, ultimately, lead to innovations within an organization. Difficult things aren’t always comfortable, but that’s where growth and advancement happen. Therefore, our aim should not be to promote comfort from hard things, but rather, to create a space where people feel safe enough to try. I see this in the classroom every day. Some of the brightest minds across the globe enroll in the MBA program at the Ross School of Business, University of Michigan, to increase their business acumen and venture out into the world as the “leaders and best.” When these students enter the classroom, they expect to be challenged with new ideas and provocations because they know, intuitively, that this is where the learning happens. If they’re presented with something they already know, something easy, they don’t learn much at all. Therefore, in an effort to foster an environment where learning is optimized, the classroom can’t be comfortable (i.e., easy); it must be challenging enough to stretch them but safe enough for them to stretch. The psychologist Lev Vygotsky, best known for his pioneering work on cognitive development, refers to this sweet spot of difficulty as the Zone of Proximal Development. This zone represents tasks that sit just outside of a student’s skill level and challenges them to stretch further with the assistance of a teacher who possesses greater knowledge or ability. It’s not easy, but it’s not impossible. It’s achievable but you have to jump to do it. If people don’t feel safe, they typically won’t jump. Therefore, it is the job of the instructional leader to facilitate a classroom environment where students feel protected enough to fail. Why? Because in these safe spaces, growth happens and the classroom improves. So, students ask “dumb questions” without fear of embarrassment. 
They say what could potentially be the wrong answer because they know if they miss the mark, they won’t be punished for it. They do it not because it’s easy, but because it’s not dangerous. The same thing goes in our organizations. If we want people to take big swings, to jump off the rock of comfort into the lake of big ideas, then we must reduce the risk, not the challenge. The differences lead to wildly different outcomes. We invited Sherlen Archibald, co-founder of idea agency We The Roses, onto the FROM THE CULTURE podcast to explore how his organization uses natural settings to foster safe environments that stretch teams to uncover new ideas and creative explorations. Check out the full episode here. View the full article
-
Italy probes LVMH-owned Sephora over ‘insidious’ skincare marketing to young girls
Competition regulator claims retailer and Benefit Cosmetics encourage purchases through ‘covert’ social media marketingView the full article
-
Wikipedia Bans Use Of AI-Generated Content via @sejournal, @martinibuster
Wikipedia's new AI guidelines prohibit editors from using LLMs for writing or rewriting content, with two exceptions. The post Wikipedia Bans Use Of AI-Generated Content appeared first on Search Engine Journal. View the full article
-
The March 2026 SEO Update by Yoast recap
The March 2026 SEO Update by Yoast is part of our monthly webinar series covering the latest developments in search and AI. Hosted by Carolyn Shelby and Alex Moss, this month’s session explored how AI is reshaping search, Google’s latest moves, and what brands should prioritize now. Watch the full recap on YouTube to dive deeper into these topics, hear audience questions, and see real-world examples.

SEO and AI news from March 2026

AI tools become more personal and mobile

AI is moving beyond standalone apps, integrating into messaging platforms (like Claude’s Telegram/Discord support) and desktop environments (e.g., Meta’s My Computer). This shift makes AI more accessible but also blurs the lines between search and daily tools.
Why it matters: Brands must ensure their content is discoverable across multiple surfaces, not just traditional search engines.
Actionable takeaway: Optimize for conversational queries and structured data to improve visibility in AI-driven tools.

Google’s patent for AI-generated landing pages

Google filed a patent describing a system that replaces traditional SERPs with AI-generated landing pages. This could signal the end of the “10 blue links” era, forcing brands to rethink how they measure visibility.
Why it matters: If Google shifts to AI-generated pages, traditional ranking metrics may become less relevant. Brands will need to control their narrative across multiple sources to ensure accuracy in AI responses.
Actionable takeaway: Audit your content for clarity and structure (e.g., avoid excessive JavaScript, use clear headings). Diversify your presence beyond your website (e.g., social media, YouTube, newsletters) to reinforce authority.

Markdown as a preferred format for AI

Markdown is gaining traction as a lightweight, AI-friendly format. WordPress.org now offers Markdown versions of pages, and tools like Cloudflare’s crawl endpoint make it easier for AI to parse content efficiently. 
Why it matters: While Google downplays Markdown’s importance, other AI tools may rely on it for grounding responses. Simplifying your content structure could improve visibility in AI-driven search.
Actionable takeaway: Consider offering Markdown versions of key pages (e.g., FAQs, product descriptions) to help AI extract content. Avoid hiding critical information in images or complex JavaScript, as AI may not process it efficiently.

Google Search Console adds branded vs. non-branded filter

Google Search Console now includes a filter to separate branded and non-branded queries. This helps brands identify confusion in search intent and optimize accordingly.
Why it matters: If non-branded queries drive traffic, it may signal an opportunity to refine messaging or target new audiences.
Actionable takeaway: Use the filter to identify gaps in your content strategy (e.g., if branded queries dominate, expand into non-branded topics). Monitor for unexpected branded queries, which may indicate confusion or misalignment with user intent.

Google Maps integrates AI for search

Google Maps is testing an AI-powered chat feature that lets users ask questions (e.g., “Find a Starbucks on my route”). Early feedback suggests it’s not yet as accurate as traditional search, but this could evolve quickly.
Why it matters: AI-driven local search could change how users discover businesses, making it critical to optimize for conversational queries.
Actionable takeaway: Ensure your Google Business Profile is up to date with accurate hours, locations, and services. Use natural language in your content to align with how users phrase questions.

Universal Commerce Protocol (UCP) expands

Google’s Universal Commerce Protocol (UCP), an open standard for AI-driven e-commerce, added new features like cart management, catalog search, and identity linking (for loyalty programs). This aims to streamline shopping within AI platforms. 
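Several of the takeaways above recommend structured data and product schema. As a minimal sketch of what that markup can look like, the snippet below builds a schema.org Product object and prints it as JSON-LD; every field value is a hypothetical placeholder, and Python is used only as a convenient way to emit the JSON.

```python
import json

# Minimal schema.org Product markup of the kind the takeaways refer to.
# All values below are hypothetical placeholders, not real product data.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A placeholder product used to illustrate structured data.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# On a real page this JSON would sit inside a
# <script type="application/ld+json"> tag in the HTML head or body.
print(json.dumps(product_schema, indent=2))
```

Keeping markup like this consistent with the visible page content (and with Merchant Center data, for e-commerce) is what makes it useful to both search engines and AI tools.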
Why it matters: UCP could become a standard for AI-powered commerce, making it essential for e-commerce brands to adopt early.
Actionable takeaway: Explore UCP integration to improve visibility in AI-driven shopping experiences. Optimize product schema and ensure your Merchant Center data is accurate.

Zero-click search doesn’t mean zero influence

Rand Fishkin’s keynote at the Industrial Marketing Summit highlighted that while zero-click searches are rising, brands can still influence AI responses by maintaining a strong, consistent presence across multiple platforms.
Why it matters: AI relies on corroborating signals (e.g., repeated mentions of your brand across trusted sources) to validate information. A single website isn’t enough, so you need a multi-channel strategy.
Actionable takeaway: Repurpose content across platforms (e.g., LinkedIn, Substack, YouTube) to reinforce your brand’s authority. Ensure your messaging is consistent across all channels to improve AI’s confidence in your content.

What to focus on in 2026

The March 2026 update highlighted several priorities for search strategy:
- Optimize for AI-driven search: Use structured data, clear headings, and consistent messaging to improve visibility in AI responses.
- Build brand authority across channels: Diversify your presence beyond your website to reinforce your narrative in AI-generated content.
- Prepare for agentic commerce: Adopt protocols like UCP and optimize product schema for AI-powered shopping.
- Avoid low-quality AI-generated content: Focus on high-value, human-centric content that aligns with user intent.

Sign up for the next SEO Update by Yoast

The next SEO Update by Yoast is on April 28, 2026, at 4:00 PM CET (10:00 AM EST). Sign up here to join the live discussion or get the recording. The post The March 2026 SEO Update by Yoast recap appeared first on Yoast. View the full article
-
AI visibility: What it is and how to grow yours in 2026
AI visibility is how often your brand appears in AI-generated answers. Learn to measure and improve it. View the full article
-
How to lead when nobody knows what’s coming
“If you can keep your head when all about you are losing theirs and blaming it on you, yours is the world, and everything that’s in it.” —Rudyard Kipling

Right now, CEOs are confronting a grim reality. The global trade system that has underpinned business planning is unravelling. Ships pile up in harbor, supply chains that have taken years to build are being undermined, and the diplomatic relations that hold world trade together are fraying. The most destabilizing feature of our current situation is the uncertainty it breeds about the future. If leaders could reliably predict the next catastrophe, at least they could plan and prepare for it. But right now, the ground rules of global commerce (and global politics, but that is a separate story) are being rewritten in real time, and nobody can say where the next chapter will lead us. The natural human response to this kind of uncertainty is twofold. We try to reduce it and we try to control it. This kind of response is very understandable. There may even be an evolutionary element that makes it natural. However, it is also precisely the wrong mindset for businesses that want to thrive in the midst of this chaos.

The Certainty Trap

When the world becomes volatile and mysterious, we search desperately for information, for someone who can tell us what is coming. And while we’re doing that, we plan and plan and plan, as though by planning the future we can master it. This behavior might look like diligent and responsible leadership. Yet the mindset that accompanies it is often anything but. The desire to do something . . . anything . . . to feel a sense of control over the situation comes from an absence of composure. It also often reflects an unrealistic view about the world. Sometimes, there is nothing we can do to turn disorder into order. A refusal to accept these very real limits can lead businesses into a variety of forms of self-harm. 
The leader who can’t sit with not knowing will do almost anything to make the discomfort of uncertainty go away. They will commit to a plan not because it is the best option, but because having a plan feels better than having a question. And this then locks the organization prematurely into a position that will be hard to change. Options that were open are now closed off. Resources that could have been spread across multiple bets are concentrated in one place. The leaders who navigate chaos effectively do something rather different. Instead of seeking certainty where there is none, they tolerate the discomfort. They stay in the space of not knowing without rushing to fill it. This is not a form of passivity and it is not indifference—it is the type of composure that is a precondition for surviving a world that is turned upside down anew each and every day.

Calm is a Competitive Advantage

In a crisis, your workforce is afraid. They’re reading the same headlines you are. They’re wondering whether their roles will exist next quarter, whether the company will pivot in a direction that leaves them behind, and whether anyone at the top actually knows what’s going on. They are looking to leadership for a signal. A leader who is visibly emotional and reactive—lurching between strategies, radiating anxiety in every town hall—doesn’t just make bad decisions. They make it impossible for anyone else to make good ones. Anxiety spirals. People stop raising problems because the boss can’t handle more bad news. They stop proposing ideas because the strategic direction changes weekly. And then they disengage and start updating their resumes. The composed leader has a different effect. They do not pretend everything is fine—composure does not mean lying about reality. Instead, they acknowledge that things aren’t fine and that the future is uncertain—and then they show that uncertainty can be faced without panic. 
This allows them to see clearly and act effectively, and their steadiness also helps their people stay focused and think clearly. Rather than serving as the catalyst for an organizational anxiety spiral, the composed leader helps generate a competence spiral instead. The advantage that composure delivers isn’t just about providing a model for your team. It is also strategic. The reactive leader overreacts to noise and is unable to stay the course. The result is resources wasted on half-executed pivots and initiatives launched and abandoned before they can deliver. The composed leader, by contrast, can absorb bad news without treating it as an emergency and can hold a strategic position long enough to know whether it is working. In volatile environments, the ability to not react is just as, if not more, important than the ability to act quickly. This is counter-intuitive for a business world that has a striking bias towards action, but it is essential for leaders to learn this truth, as the future of their business may depend on it.

Composure in Practice

Here are three ways to bring composure into your leadership.

1. Start with yourself

Knowing that composure matters is one thing. Actually cultivating it is another — and like any meaningful capability, it requires deliberate practice. Composure isn’t only a skill directed outward; it is, first and fundamentally, an inward discipline. A mindful organization requires a mindful leader: someone who manages stress, reframes risk, and fosters the creativity and clarity that crises demand. The good news is that cultivating inner composure doesn’t require a meditation retreat. Here is a simple technique you can practice at any point in the working day:

S — Stop what you’re doing, if only for a moment.
T — Take a breath, slowly and completely.
O — Observe how you feel. What are you thinking about right now, at this very moment in time?
P — Proceed. Return to what you were doing—but take notice. Do you feel refreshed? 
Can you see what you were doing from a different perspective? There is nothing complex about this technique, but that is precisely the point. It brings your conscious attention back to the present, giving you the chance to choose your response rather than simply react—and interrupting the fight-or-flight shortcuts that evolved for physical danger, not the pressures of leadership.

2. Don’t plan—create options instead

In stable environments, leaders build plans—and in volatile environments, fixed plans can become liabilities. The alternative is to create options—to spread risk across multiple initiatives and to keep several paths open rather than committing prematurely to one. In practice, this means building and maintaining a diversified portfolio of initiatives—quick wins that generate immediate returns and fund the longer plays, medium-risk bets that deliver value over 12 to 18 months, and moonshots that could transform the business. Crucially, when one bet fails or the world shifts, the portfolio absorbs the shock. The organization survives because it wasn’t dependent on a single outcome. But running a portfolio is emotionally demanding. You’re funding things that might fail. You’re watching a competitor go all-in on one bet and wondering if they’re right. Anxious leaders can’t tolerate that ambiguity. They collapse the portfolio into a single bet at the first moment of pressure, because committing feels like control, even when it’s reckless. Composure is what allows a leader to resist that impulse—to hold the portfolio together long enough to see which bets will actually be rewarded by an uncertain world.

3. Bring your people into the process

One of the most common failures of leadership in crisis is the retreat into isolation. Under pressure, leaders narrow their circle, make decisions behind closed doors, and then announce the outcome to an organization that had no part in shaping it. 
Collaboration is slow and messy, full of competing perspectives that make the path forward less clear, not more. It takes composure to tolerate that mess. But the mess is where the value is. People who helped shape the response are already prepared to execute it. Diverse perspectives surface risks that no single leader can see. And the cultural readiness that organizations need to navigate rapid change doesn’t happen after the strategy is set—it happens during the process of setting it. Keeping people close also means keeping them informed. In uncertain environments, silence is toxic—when people don’t hear from leadership, they fill the vacuum with worst-case assumptions. The composed leader resists the twin temptations of going quiet or manufacturing false certainty. Instead, they share what they know, acknowledge what they don’t, and describe the process by which decisions will be made. Simply saying “I don’t know, but here is how we will find out” is not a weakness. In a storm, it is exactly what people need to hear.

The Leadership the Moment Demands

Composure is not the absence of urgency. It is the foundation on which effective urgency is built. And this moment demands leaders who are composed—leaders who can hold steady when nobody knows what’s coming, who can keep their head when everyone around them is losing theirs. It’s quite simple, really. The most powerful thing a leader can do in a storm is to stay calm—and then get to work. View the full article
-
Top ‘I told you so’ moments in the history of science
Below, Matt Kaplan shares five key insights from his new book, I Told You So!: Scientists Who Were Ridiculed, Exiled, and Imprisoned for Being Right. Matt is a science correspondent at The Economist, where he has written about everything from paleontology and parasites to virology and viticulture over the course of two decades. His writing has also appeared in National Geographic, New Scientist, Nature, and the New York Times.

What’s the big idea?

Science often suppresses bold, unconventional, or threatening ideas due to ego, hierarchy, competition, sexism, and fraud. This culture harms progress. To truly serve society, science needs structural and cultural reform that protects integrity and encourages intellectual risk-taking. Listen to the audio version of this Book Bite—read by Matt himself—in the Next Big Idea App, or buy the book.

1. Stupidly silenced

In the middle of the pandemic, I was interviewing researchers who were trying to defeat COVID-19 or help patients in hospitals. Something that blew me away during this period was how often I would hear really impressive ideas that I thought were worth reporting on, but then the scientist would say, “Oh no, no, no. You can’t say that.” And when I asked why, these are some of the responses I got: “Well, other scientists wouldn’t take me seriously anymore if you shared that.” “I’m a PhD student and the idea I just shared with you would be a threat to the work done by my PhD supervisor. I might be fired.” “Well, I really need to test my idea out extensively first and I’m never going to get funding for this, so it’s not even worth talking about or reporting on.” “This is immunology, Matt, and let’s face it, I’m a woman.” I thought this was nuts. 
We were in the middle of a pandemic with thousands of people dying, and I’ve got researchers who are saying, “Yeah, don’t share my ideas with anybody else because either my PhD supervisor won’t accept it, or other people might laugh at me, or because I’m a woman.” These are not good reasons to hide important ideas during a time when many people are losing their lives. Has science always been like this? Have we always had behaviors like this cropping up in the field? The answer is yes.

2. Punished for thinking outside the box

Hungarian obstetrician Ignaz Semmelweis was based in Austria at the Vienna Hospital. Most of his work entailed delivering babies all day long. He was very, very good at it, but he was also deeply troubled by the fact that numerous women died shortly after delivery. And when they died, their baby almost always died too. Semmelweis was heartbroken by this reality and wanted to understand why. The disease was called childbed fever, and Semmelweis ran experiments trying to figure out the cause. It was killing one in 10 women after delivery. He ultimately worked out that it was the common practice of doctors visiting the morgue in the morning. Doctors were going there to dissect patients who had died the previous day because they wanted to understand why they hadn’t survived. This was important for academic learning, but it was a disaster for health. Yes, doctors washed their hands after handling dead patients, but the soap and water mechanism did not get rid of all the deadly bacteria growing on those corpses. As a result, doctors would then go up to deliver babies, and as they went up to mothers who were in labor, they would put their fingers inside to feel for the baby’s head, sometimes move the umbilical cord from around the baby’s neck, or just generally assist in delivery. Women who were treated by doctors who had only used soap and water to wash their hands were infected with bacteria from under the doctors’ fingernails. 
This caused childbed fever and was almost always lethal. Semmelweis developed a technique for washing hands with a chlorine solution that removed the bacteria and effectively eliminated childbed fever. It was a huge advancement. However, when he told other doctors to follow suit, he was vigorously criticized. The other doctors said, “Sir, we are gentlemen. How dare you tell us that our hands are dirty?” Nobody had any idea about bacteria at the time, so they couldn’t look through a microscope and demonstrate that these people all had dirty hands. Semmelweis was ultimately fired, exiled back to Hungary, and forced into an insane asylum by his own peers. Semmelweis’ story is effectively reflected by the modern Hungarian biochemist Katalin Karikó. Karikó had come to the United States as an expert in messenger RNA. She had demonstrated that messenger RNA could produce almost any protein within the body, and it could be used to develop drugs or treat diseases. Nobody believed that messenger RNA had any kind of future because whenever it entered the body, it broke apart. Karikó worked with an immunologist to demonstrate that, by using certain immune proteins on the messenger RNA, she could prevent it from falling apart inside the body and use it to help treat diseases. Ultimately, she and immunologist Drew Weissman created the COVID vaccine when she was based at BioNTech and Pfizer, two biotechnology companies. However, before she got there, she had been demoted by the University of Pennsylvania, fired and threatened with deportation by the US Department of State. More importantly, she couldn’t get funding. Nobody believed in her research. Without her resilience, we wouldn’t have the COVID vaccine.

3. Damned lies and journal articles

There were two rural veterinarians in France, one named Henry Toussaint and another named Pierre Galtier. 
They’re unknown to most people, but they shouldn’t be. Toussaint effectively invented the anthrax vaccine in 1880, and Galtier paved the way for the rabies vaccine that followed in 1881. We don’t know their names because of a scientist everyone does know: Louis Pasteur.

Pasteur had worked hard to develop vaccines against both anthrax and rabies, and he wanted the glory and reward for defeating both diseases. When he found out that two country-bumpkin veterinarians had effectively invented the vaccines he had been working on, he could not tolerate the notion that they would beat him to the punch. So he copied their techniques, lied about it, and then used his political clout with the French government to discredit and destroy both veterinarians.

What’s particularly staggering about Louis Pasteur is how history has treated him. One scholar wrote, “His skillful exploitation of the political advantages that he enjoyed show that he was, in fact, the better scientist.” Another wrote, “When considering his behaviors, you have to keep in mind the highly competitive context of mid-19th century French academic life.” Are you kidding me? Does a highly competitive environment somehow make unethical behavior excusable?

And we still have this problem today. In 2023, Retraction Watch noted that almost 19,000 papers in biomedical research alone were retracted. Some retractions occur because of contamination or other honest mistakes during research, but the majority of papers retracted in 2023 were pulled for plagiarism or fraud. We cannot be operating like this.

4. Peer review or peer re-view

Joseph Lister was working in the hospitals of Edinburgh and Glasgow during the Victorian period. As a surgeon, he noted that postoperative infection was the leading cause of death after surgery.
He worked out that he could prevent postoperative infection by drenching wounds in carbolic acid and then keeping the surgical site disinfected with bandages soaked in the stuff during healing. While his findings were initially met with cautious interest, a fellow surgeon named James Simpson whipped the medical community into an aggressive frenzy against him, forcing Lister into silence for years.

Simpson led the charge against Lister because he wanted to be the one to defeat postoperative infection first. Simpson had a theory about a technique called acupressure: by sticking little needles into the tissue around the surgical site, you would spread out the inflammation so that a broad mass of surrounding tissue was inflamed rather than the single cutting site, which he believed would reduce the risk of postoperative infection. There was absolutely no evidence that his acupressure technique worked. Even so, he wasn’t willing to accept the news that carbolic acid could solve the problem he had been laboring to defeat. Attacking Lister was essential for the survival of his acupressure theory, and that’s exactly what he did.

We still see this problem today. Scientists attack other scientists, not because their ideas are bad, but because those ideas threaten the territory they’re currently exploring. We can’t have scientists shooting each other down just because someone else solved the problem first. Scientists are supposed to work together for the betterment of humanity.

5. What the heck do we do about it?

With regard to fraud, we need to develop a system for tracking down researchers who commit it. If you steal money from a bank, you go to jail. If you commit fraud with research funding, that’s effectively stealing, yet there is no going to jail for that right now.
At best, you get fired from your job at the university. That needs to change. We need to make sure that the minority of scientists who engage in fraud are punished.

Similarly, we need to find ways not to punish scientists whose ideas fall outside the mainstream. If someone has a weird idea but also a good reason for putting it forward and a convincing proposal explaining how it can be explored, then we need to make funding available to them, too. We need to do this more often because, as things stand, we only fund research that is expected to work. That’s not helpful for coming up with creative solutions to big problems, like feeding eight billion people or defeating climate change.

We also need to protect scientists in vulnerable positions. Researchers who are undergraduates or PhD students are afraid that their supervisors will not like the ideas they come up with. That can’t stand. If a scientist, no matter how young, has an idea that runs contrary to the thinking in their lab, their university, or the greater scientific community, the university needs to be prepared to roll up its sleeves and say, “We need to give this interesting idea a fair shake,” rather than, “Boy, that’s weird. Let’s throw it out just because it’s strange.” We can’t go on like this. A culture shift needs to occur in science to make space for fresh ideas.

And finally, we need to talk about the sausage-making. The Economist has long argued against the notion that you never want to see how laws and sausages are made because the process is disgusting. Well, we need to apply that to science, too. Talking about how science functions and malfunctions matters, because people need to understand it. People are voters. They vote to support different kinds of funding and politicians who will back different types of research efforts.
The public needs to know that scientists sometimes fail—and, in fact, failure is important. If we don’t fund scientific efforts that take a gamble, we’re rarely (if ever) going to have the big breakthroughs we need.

This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.