All Activity

  1. Past hour
  2. Brandon Ervin, Director of Product Management for Google Search Ads, recently discussed campaign consolidation, AI Max, and what advertiser control looks like in 2026 on Google’s Ads Decoded podcast. The conversation was serious and informed, and reflected a product team that understands advertiser concerns and is actively working to address them. But the podcast is also incomplete. The gap between what Google said and what advertisers actually experience from their sales organization is large enough to warrant a direct response. Ervin’s team is doing genuinely good work, but the platform’s structural incentives haven’t changed. Google’s evolving product is creating problems faster than it can solve them. Performance is now measured on economic standards, shaping how a search ads audit is performed. Google’s recent improvements to Search Ads are genuine: brand exclusions in Performance Max and Demand Gen; site visitor and customer exclusions from PMax campaigns; network-level reporting within bundled campaigns; improved search term visibility; brand and geo controls inside AI Max at the ad group level; and semantic modeling that doesn’t anchor on campaign or ad group IDs, reducing learning-period risk during consolidation. These are meaningful. They are also solutions to issues introduced by bundling, opacity, and aggressive automation rollout. These products have been mercilessly shopped to advertisers since 2021, and the controls that make them usable arrived years after the sales push began. The ability to separate brand from non-brand traffic inside PMax/AI Max should not be framed as innovation. It restores a fundamental distinction that previously existed by default. The ability to see network performance inside a bundled campaign is not an expansion of control. It restores visibility that was removed. An audit must ask whether new tools are genuinely expanding control or merely reintroducing baseline transparency. 
Table stakes: What everyone agrees on Before the real audit begins, the fundamentals. These are uncontroversial and should already be in place: Run full ad extensions (sitelinks, callouts, structured snippets, image, call). Use automated bidding with intentional target-setting and conversion action selection (I recognize there are still holdouts here, but it seems crazy to me). Maintain negative keyword lists. Write ads relevant to the queries they serve. Audit automatically created assets for accuracy and brand safety. Cut Search Partners and Display expansion from Search campaigns. Separate brand and generic campaigns using brand controls. Exclude site visitors and past customers from prospecting campaigns where appropriate. Import offline conversion data (MQLs, SQLs, revenue, CLV, repeat rate) to feed the algorithm downstream signals. Weight conversion values by actual downstream conversion rates. Account for mobile vs. desktop performance gaps. Those are table stakes. The real audit begins after that. What a 2026 search audit must focus on With the prevalence of AI, advertisers need to focus on reconstructing economic visibility in systems designed around aggregation and automation. Signal architecture In the podcast, Ervin says “control still exists, it just looks different.” Ad controls — where, when, and to whom ads appear — are still important, and changing, some think, for the worse. The old ad controls — exact match, manual bids, network selection, and device modifiers — gave advertisers direct influence over where ads appeared and what they paid. The new controls are indirect. Control now lives in data quality, density, and selectivity. These inputs influence the algorithm, but the algorithm makes the final call. 
An audit should focus on three questions: Quality: Are you importing revenue, pipeline stage, or qualified lead status, or only surface conversions? Density: Is there enough high-quality data for the model to learn from, or is it sparse and noisy? Selectivity: Are you intentionally limiting what Google can see, or are you passing everything indiscriminately? With these new tactics, you might pass only net-new customers or high-value customers. The majority of the time, though, it is better to just pass the densest and most predictive conversion set. Incrementality Google optimizes toward reported conversions, not incremental conversions. Brand search often captures existing demand. Retargeting often captures users already in motion. PMax/AI Max frequently blends these signals. Ervin was asked: Are AI-driven campaigns over-indexing on warm brand traffic to inflate blended ROAS (return on ad spend)? He doesn’t dispute the problem, but points to partial solutions, including using brand controls, theming your account more tightly, and running multi-campaign A/B tests. If incrementality is not measured, automation amplifies non-incremental signals. Marginal returns Google uses a blended cost-per-action (CPA). For example, the first $50K of spend might return a $30 CPA, while the next $50K might return a $120 CPA. With automation, money is spent until the blended metric falls within tolerance, meaning the last dollar is not spent efficiently. The vast majority of advertisers are bidding far beyond what they should be and have no idea it is happening. An audit must: Plot spend against incremental conversions. Estimate marginal CPA at each spend tier. Identify diminishing return curves. Compare marginal CPA to lifetime value. A lower target makes the algorithm more selective, competing in fewer high-value auctions. Google doesn’t suggest this because it would mean less spend, and because in Google’s framing lower bids are simply less effective. 
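The marginal-returns check above can be sketched in a few lines. All numbers below are hypothetical, chosen only to mirror the $30-then-$120 example in the text; in practice the cumulative (spend, conversions) pairs would come from your own budget or geo-holdout experiments:

```python
# Hypothetical cumulative spend tiers: (total spend, total conversions).
# These illustrative figures mirror the article's $30/$120 CPA example.
tiers = [
    (50_000, 1_667),   # first $50K: ~$30 blended CPA
    (100_000, 2_083),  # next $50K adds only ~416 conversions
    (150_000, 2_283),  # the tier after that is worse still
]

def marginal_cpa(tiers):
    """Return (spend, blended CPA, marginal CPA) for each cumulative tier."""
    rows, prev_spend, prev_conv = [], 0, 0
    for spend, conv in tiers:
        blended = round(spend / conv, 2)  # what the platform reports
        # what the *last* tier of dollars actually bought
        marginal = round((spend - prev_spend) / (conv - prev_conv), 2)
        rows.append((spend, blended, marginal))
        prev_spend, prev_conv = spend, conv
    return rows

for spend, blended, marginal in marginal_cpa(tiers):
    print(f"${spend:,}: blended CPA ${blended}, marginal CPA ${marginal}")
```

With these numbers, blended CPA at $100K still looks tolerable (~$48) while the marginal CPA has already hit ~$120 — exactly the gap between the blended metric Google reports and the last-dollar efficiency the audit is meant to expose.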
Query resolution and ability to lower targets On the podcast, Ervin acknowledges that some AI Max matches can “look a little wonky” and says his team is working on exposing the model’s reasoning. Query mapping has gotten meaningfully worse over the past several years: queries landing in the wrong ad groups, matching to keywords with different intent, and broad match pulling in traffic unrelated to the keyword. AI Max has accelerated this — there’s been an increase in the volume of irrelevant queries flowing through AI Max campaigns, with no connection to the advertiser’s business or keywords in the account. Meanwhile, Google’s recommendations consistently push toward broad matching and large themed ad groups. The issue is not whether broad match works, but whether high-value intent is being diluted in larger, broader ad groups. Fewer ad groups mean that advertisers cannot effectively or meaningfully lower targets without a massive structural negative schema, so performance differences have to be large enough to validate the new structure. An audit should: Extract full search term reports. Classify queries by intent tier. Compare CPA and lifetime value by query type. Quantify irrelevant or weakly related matches. Measure performance drift across match types. Network economics Performance Max and Demand Gen bundle multiple networks into single campaigns, but offer limited visibility into which networks drive results. This makes it hard to cut the underperforming ones. The slow rollout of network-level controls systematically benefits Google’s less competitive inventory. An audit must: Break out performance by network. Compare CPA and lifetime value by placement. Identify cross-subsidization. Determine whether weaker networks are relying on surplus from strong search inventory. 
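The search-term audit steps above lend themselves to a small script. A minimal sketch, assuming a search term report exported as (query, cost, conversions) rows; the tier names, regex patterns, and the brand term "acme" are made up for illustration, not a standard taxonomy:

```python
import re
from collections import defaultdict

# Illustrative intent tiers. The brand term "acme" and these patterns are
# assumptions for the sketch - swap in your own brand terms and taxonomy.
# First matching tier wins, so brand is checked before purchase language.
INTENT_TIERS = [
    ("brand", re.compile(r"\bacme\b", re.I)),
    ("high_intent", re.compile(r"\b(buy|pricing|quote|demo)\b", re.I)),
    ("research", re.compile(r"\b(what is|how to|vs|review)\b", re.I)),
]

def classify(query):
    for tier, pattern in INTENT_TIERS:
        if pattern.search(query):
            return tier
    return "unclassified"  # candidates for negatives or exclusion

def audit(rows):
    """rows: iterable of (query, cost, conversions) from a search term export."""
    cost, conv = defaultdict(float), defaultdict(float)
    for query, c, v in rows:
        tier = classify(query)
        cost[tier] += c
        conv[tier] += v
    # CPA per tier; None where a tier spent money without converting at all
    return {t: (cost[t] / conv[t] if conv[t] else None) for t in cost}

sample = [
    ("acme pricing", 120.0, 6),
    ("buy project tracker", 300.0, 5),
    ("what is a gantt chart", 90.0, 1),
    ("free wallpaper download", 40.0, 0),
]
```

Aggregating CPA by tier like this makes the dilution visible: a blended campaign CPA can look fine while the "unclassified" tier burns spend with no conversions at all.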
Value redistribution Combining these elements in your audit will help you succeed in this new world of search ads: Non-incremental traffic inflates conversion counts, making performance look better than it is. Looser match types expand where ads appear, diluting intent precision and pushing spend into fewer ad groups with blanket-level targets and bids. No clean marginal return visibility means it is much more difficult to find the point of negative return. Network bundling hides which channels actually perform. The cumulative effect is that the surplus value generated by your best inventory and high-intent, high-converting search queries gets redistributed across Google’s weaker inventory (i.e., Display, YouTube, Discover, Gmail, crazy tail queries). This is how a dwindling supply of valuable search queries ends up inflating the cost-per-click (CPC) of low-quality inventory. The Ads Decoded episode: Is your campaign structure holding you back in the era of AI? View the full article
  3. That may sound defeatist, but unfortunately that’s just how the web works. Rankings slip, competitors improve, search intent shifts, and what was your best-performing article two years ago might be leaking traffic right now without you even noticing. This is…Read more ›View the full article
  4. Hello again, and welcome back to Fast Company’s Plugged In. On March 9, Jay Graber stepped down as CEO of Bluesky. She will become the social networking platform’s chief innovation officer, while Toni Schneider, a venture capitalist and former CEO of WordPress parent company Automattic, joins Bluesky as interim CEO. (I may be the last person left who also associates Schneider with Oddpost, an impressive browser-based email client he co-created way back before Gmail existed.) Graber explained her decision as stemming in part from a desire to turn the CEO role over to someone who can help scale up the platform. From November 2024 to January 2025, as Elon Musk’s role in Donald Trump’s reelection prompted many Twitter users (including me) to hatch exit strategies, Bluesky added 10 million users. That turned out to be the peak of the network’s boom, at least so far; 10 million users is also how many it’s added in the past 12 months. It’s still growing, but not at the torrid pace that will get it to hundreds of millions of people anytime soon. If I had invested in Bluesky—which Schneider’s venture firm, True Ventures, has—I’d want to see it grow far larger. As an individual user, however, I find it quite pleasant at its current size. Maybe even cozy, in a way Twitter had stopped being long before Musk trashed it. (I also enjoy the even tinier Mastodon.) Should Bluesky ever get ginormous, I hope it manages to retain the intimacy that it kindles today. But I’m less curious about the future of Bluesky the social network than I am about the technology behind it. Called AT Protocol, it’s responsible for organizing all those users and posts so that the right people see the right stuff at the right time. And unlike the comparable infrastructure in place at behemoths such as Twitter, Facebook, and Instagram, it’s open. 
Anyone can create their own social network based upon AT Protocol, or remix an existing one (such as Bluesky) by tweaking its algorithm or other attributes. Users can preserve their personal social graphs even if they use several otherwise distinct networks based on the protocol. When I first talked to Graber in December 2023, Bluesky wasn’t yet fully open to the public, and had just 2.3 million members. She seemed as excited about AT Protocol as Bluesky itself, and told me she saw it as a potential antidote to social-media toxicity, moderation problems, and general user dissatisfaction with how the people who operate social networks do their jobs. If you didn’t like Bluesky as Graber managed it, you could switch to a version of the service powered by a different algorithm, or a wholly independent social network running AT Protocol. You wouldn’t even have to do so much as create a user account. From both a technological and cultural standpoint, that’s a way more grandiose goal than simply building a social network that’s bigger and better than Twitter. As someone who loved Twitter until I didn’t, I found it immensely appealing. Who wouldn’t want more control over their social presence? But a little over two years later, it remains a vision more than reality. Indeed, Bluesky has a festering reputation in some quarters as an obnoxious liberal bubble unwelcoming of other perspectives, which might not be a problem if people were remastering the network or creating new alternatives based on its technology. AT Protocol was hardly dead on arrival. There are hundreds of applications that use it, from Instagram and TikTok alternatives to a stock portfolio tracker to an app that puts Bluesky on your Apple Watch. Many are intriguing in their own right. But most are satellites revolving around Bluesky and its community, which was not the original idea. Even when I spoke to Graber in 2023, the possibility of an open social protocol changing everything was not exactly new. 
Mastodon, which turns 10 on March 16, is powered by ActivityPub, a standard with goals similar to AT Protocol. Meta incorporated a measure of ActivityPub support into Threads (kinda, sorta)—and it’s not clear how invested the company is in going further. Even more to the point, Twitter cofounder and former CEO Jack Dorsey has long said that he regrets that Twitter ever became a company. Instead, he contends, it should have been an open protocol all along. Toward the end of his time there, he channeled that belief into incubating two such protocols. One became Bluesky; the other is the lesser-known Nostr, whose homepage cheerfully acknowledges the challenge it faces with the tagline “An open social protocol with a chance of working.” I wish the best for everyone behind AT Protocol, ActivityPub, and Nostr, but I can’t help but wonder if the failure of the relatively small number of people interested in this stuff to coalesce around one protocol helps explain why progress has been so slow. (As computer scientist Andrew S. Tanenbaum waggishly put it in the 1980s, “The nice thing about standards is that you have so many to choose from.”) It’s as if the companies that made browsers had never agreed on the shared technological underpinnings that let us use Chrome, Safari, Firefox, or any of innumerable other options to explore the same World Wide Web. For now, I am attempting to stay active on Bluesky, Mastodon, and Threads, though it’s hardly a cakewalk. Openvibe, the app I used to post to all three, has become so unreliable lately that I’ve mostly given up on it. Flipboard CEO Mike McCue tells me that he wants to add crossposting to Surf—a wildly ambitious app, still in closed beta, that weaves together the entire internet into user-curated feeds—but is still figuring out how to do it well. 
The only long-term solution involves all of these networks—plus Twitter, Facebook, and many others yet to be born—settling on a protocol so universal that they all just work together, without 99.9% of us needing to stop and wonder why. I’m realistic about the daunting odds of this happening, but I haven’t given up. And I hope that Bluesky won’t either—regardless of where it goes under new management. You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard. More top tech stories from Fast Company MacBook Neo review: niceness on a budget Apple’s long-awaited laptop is even cheaper than the pundits expected, and still feels like a Mac. Read More → Phoenix has lived with Waymos longer than any U.S. city. Here’s what its mayor learned Mayor Kate Gallego talks about working with Waymo, redesigning cities for autonomous vehicles, and why robotaxis may reshape everything from parking to public transit. Read More → GoFundMe launches AI fundraising coach to help people raise more money The new tool drafts campaign messages, suggests titles and photos, and guides users on how to share their fundraiser. Read More → This new foldable phone may have upstaged Apple in the ‘zero-crease’ war Oppo’s Find N6 isn’t fully creaseless, but it’s close. Read More → OpenAI’s delayed ‘adult mode’ underscores the challenges of age-gating AI A lot is riding on OpenAI’s ability to separate older ChatGPT users from younger ones. 
Read More → The uncomfortable valley: Microsoft Teams emoji faces have got to go They don’t make the digital workplace more casual. They make it uncomfortably weird. Read More → View the full article
  5. The problem: ChatGPT doesn’t have “rankings”. At least not in any traditional sense. Its responses are probabilistic: different every time, with brands appearing and disappearing from one query to the next. According to research from SparkToro, there’s a <1 in…Read more ›View the full article
  6. As of yesterday, March 12, hundreds of thousands of innovators, disruptors, and leaders began descending on Austin for SXSW. If you search “Tech and AI” in this year’s schedule, you’ll find 185 results. That’s more than double the 80 AI sessions in 2024, the same year I wrote a Fast Company op-ed about how women have spent decades building the intellectual foundation of AI while receiving almost none of the credit. It was also the year that companies with at least one female founder raised $38.8 billion in venture capital funding, a 27 percent increase from the year prior but still not close to the 2021 high point, when $62.5 billion was raised. Two years later, the gap—both in acknowledgement and investor funding—hasn’t closed. However, something else is happening, and it’s worth paying attention to. There is a new wave of women who refuse to wait for the AI industry to become “fair” and “equal.” They are building their own companies, on their own terms, with a more authentic and purpose-driven design mentality. It’s not general-purpose AI; it’s gender-purpose AI. An important distinction Before you roll your eyes, the distinction matters more than you might think. By 2030—which is now only four years away—AI won’t just enhance companies’ business models. According to IBM, it will be the business model. Right now, that business model is being built, unsurprisingly, by male-dominated teams for general audiences. The truth is technology—as an industry and a concept—was never built for women. It was not built to prioritize or accommodate our visions. But that is changing. A new class of female leaders in AI is disrupting this model and demanding more room for gender-purpose AI and less patience for the influx of male-dominated teams building general-purpose tools. This is the year we move beyond celebrating their presence and start backing their vision with real investment. 
One of those women is Rana el Kaliouby, co-founder and general partner of Blue Tulip Ventures, who will deliver a keynote at this year’s conference titled “Why the Future of AI Must Be Human Centric.” She has spent more than two decades humanizing technology. As co-founder of Affectiva, she pioneered the field of Emotion AI, which reads human feeling through facial expressions and vocal cues, and now, at Blue Tulip Ventures, she literally puts her money where her mission is, investing in early-stage startups building ethical AI that is good for people. The word “good” is subjective. But for too long, it’s been defined by the people building the problem, not solving it. The problem is also being solved by women like Valerie Chapman, CEO and co-founder of Ruth AI, an AI-powered career advancement platform. Last month, Valerie asked Sam Altman at an OpenAI builder town hall how AI can be used to fix the $1.6 trillion gender wage gap. His response was that AI should be an equalizing force in society. As Valerie pointed out in her recent op-ed, when AI is designed with intention it can close the gap, and it’s time to build it. What’s next As a fellow female founder helping brands understand and utilize AI—as a topic and technology—in their comms strategies, here’s what this shift tells me about where we are headed in 2026. Male tech leaders want AGI. Female tech leaders want gender-purpose AI. The second is more inclusive. When women build AI, they tend to ask different questions in the design and development stage. Questions like: who is this actually for, and who will benefit from these capabilities? The truth is artificial general intelligence, or AGI, is at least 10 years away, and the race toward the “holy grail,” as Big Tech has coined it, should not hold as much power and influence as it does. Gender-purpose AI is a race toward something more rewarding and meaningful: relevance. 
What a concept—that we could have more technology that works for the people it claims to serve. The gender wage gap will not close with more women working in tech. It will close when more women are building tech. Representation matters every month, not just during Black History Month, Women’s History Month, or International Women’s Day. Women deserve representation in the very tools and technologies they depend on. With almost 78 million women in the American workforce, this is a demographic that has earned our time, attention, and investment. Investment in gender-purpose AI means nothing without investing in the women who will build tomorrow’s innovations. The increase in female-founded, VC-funded companies is a great step in the right direction. But the progress pipeline matters just as much, if not more. We need more mentorship programs, technical education, and access to capital for first-time female founders who have the vision but not a seat at the Big Tech table. To ensure we double down on gender-purpose AI as an industry, we have to prioritize and support the women who want to build what comes next. The milestones for women in AI aren’t just on stage. They are in hallways and in boardrooms. When women lead AI companies, the product looks different. Canadian computer scientist Joy Buolamwini’s 2018 ‘Gender Shades’ project piloted an intersectional approach to inclusive product testing for AI, exposed racial and gender bias in Microsoft’s, IBM’s, and Amazon’s facial recognition systems, and pushed those companies to change. Rana built technology that reads human emotion because she believed machines should understand people, not just process them. These are real-world use cases that prove that whoever builds the technology determines what the technology does and who it serves. In 2026, women won’t be waiting for “the next big thing” because they will be the ones behind it. 
They will be the ones building the technology that addresses what male leaders have not addressed: equity, inclusion, and a redefinition of “good” that finally reflects what 51% of the world wants, needs and deserves. It’s time the other 49% joined us. View the full article
  7. The latest PPC Pulse highlights Google’s agency-focused Merchant Center rollout, Smart Bidding guidance for new campaigns, and emerging AI usage trends in PPC. The post Merchant Center Expands, Google Clarifies Smart Bidding, State Of PPC Report – PPC Pulse appeared first on Search Engine Journal. View the full article
  8. Today
  9. Usually the epitome of good humor, my friend was seething. She had devised a zany and creative marketing idea for her firm. Securing the budget, designing a content strategy, hiring a creative agency, and then doing all the related work had consumed Alex and her team for a full six months. This was on top of their already demanding jobs. And then the unthinkable happened. “Before the idea was announced, one of my coworkers, a PR guy, shared the idea—my idea—with the CEO and CMO.” I watched her pace around my kitchen, her face getting redder and redder. “While he didn’t exactly say he’d done the work himself, how he talked about it made it seem like it was all his.” “Did you tell anyone, go to your manager?” I asked. Alex stopped her pacing. “I did, and he said, ‘When you’re creative, people will steal your ideas—you should just get used to that fact.’” As we talked, I could hear that under Alex’s anger was something else—curiosity. About what this all meant. About what she could have, or should have, done differently. Was she the problem? Did she need to figure out how to play the game better? Was the PR guy the issue? Or her boss? And if it was her boss, did she need to quit? Those were the wrong questions. It’s not you or them. The problem lies in the norm of tolerating bad behavior. When workplaces say, “Creative ideas get stolen,” harm becomes a given, not a choice. Ideas get stolen because there’s no accountability. To be clear, sometimes an idea is just in the air, and two or more people come to it around the same time. And oftentimes, we create ideas together. I’m not talking about those moments. I’m talking about when it’s fully apparent what is happening—idea theft, where one party takes credit for the work of others—and how that theft is tolerated. Research shows that knowledge workers are keenly aware of idea theft; nearly one-third report having had it happen to them. Work often treats idea theft as no big deal. But the cost is real. 
• Integrity is lost when ideas are disconnected from their source. The depth of the concept or the completeness of the thinking is lost. Downstream decisions are made without the rootedness of the original inspiration. • Theft demotivates the next idea. When ideas are stolen regularly, idea generation shuts down because no one volunteers to be violated. And Alex’s boss was right about one thing: Alex will certainly create more ideas. People create when they feel safe enough to imagine something new. That—by definition—is why regulating bad behavior matters. The idea that was stolen? It became one of the firm’s most successful efforts that year. It inspired the company’s next ad campaign and even a Super Bowl spot. But they didn’t have any follow-up to this one-off success. Why? Because they no longer had Alex. The Counterintuitive Insight: We Can Take Care of Our Commons Most of us are taught to stay quiet. Don’t make a scene. Go along to get along. And when someone crosses a line—steals credit, dominates meetings, dismisses ideas—we assume someone in authority will fix it. But that assumption hides a deeper truth: the rules of our workplaces are not enforced by leaders alone. They are enforced by what we tolerate together. In 2009, political economist Elinor Ostrom won the Nobel Prize in economics for proving something that ran against decades of economic orthodoxy. Before her work, economists widely believed in the “tragedy of the commons”—the idea that when a resource is shared, individuals will inevitably overuse it and destroy it. The only solution, it was thought, was top-down control: private ownership or government regulation. Ostrom proved otherwise. She showed that communities, left to their own devices, often devise highly sophisticated systems of shared management—systems where consequences don’t come from a distant authority but from the group itself. The people who depend on each other can also hold each other accountable. 
Her work wasn’t about office politics. But it applies. Every team shares something. It might not be water or grazing land. But trust. Energy. Credit. Voice. And just like natural resources, these intangible goods are depleted when people act only in their own interests at the expense of shared interests. When a manager takes all the credit. When someone interrupts constantly. When emotional labor always falls on the same shoulders. What Ostrom teaches us is that we don’t have to live inside that dynamic. We can protect shared goods—not with permission from the top, but through practices we design ourselves. Through consequences we create and apply together. Shared spaces survive when the people inside them protect them. Change the Norm When something harmful happens at work, our instincts split: ignore it or wait for someone in charge to handle it. But silence has a cost. It makes us complicit in what we ache to change. Monica Lewinsky—dragged through the mud of a scandal she didn’t create alone—calls on us to be upstanders: people who don’t just stand by, but stand up. Who see cruelty and choose courage. Who see harm and refuse to treat it as normal. Research shows that when bystanders step in, bullying stops within seconds—proving that empowering peers to act can cut bad behavior in half. What we allow becomes the rule of the room. When someone steals an idea, and no one says anything, the norm survives. When someone names it—calmly, clearly—the rule changes. But let’s be clear: This isn’t work any of us do alone. If bad behavior is tolerated, it grows. When it meets consequences, it stops. Bad behavior isn’t mysterious—it’s simply a crime of opportunity, repeated when no one intervenes. This is not a personal problem. It’s a social problem. It’s up to those who see it to act—to create the consequences. Not just to protect the harmed, but to stop the harm from spreading. Behavior doesn’t change because people suddenly become better. 
It changes because someone names what’s happening and refuses to treat it as normal. When you do, you won’t do it alone. Another person will join in. And then another. Until teams decide: We can be clear, fair, and firm with each other. That our shared space is worth defending, protecting. Let yourself run toward that danger, not away from it. Adapted from the book Our Best Work: Break Free from the 24 Invisible Norms That Limit Us, by Nilofer Merchant. Copyright © 2026 by Nilofer Merchant. Reprinted by permission of Harper Business, an imprint of HarperCollins Publishers. View the full article
  10. We recently started a small project to clean up how parts of our systems communicate behind the scenes at Buffer. Some quick context: we use something called SQS (Amazon Simple Queue Service). These queues act like waiting rooms for tasks. One part of our system drops off a message, and another picks it up later. Think of it like leaving a note for a coworker: "Hey, when you get a chance, process this data." The system that sends the note doesn't have to wait around for a response. Our project was to perform routine maintenance: update the tools we use to test queues locally and clean up their configuration. But while we were mapping out what queues we actually use, we found something we didn't expect: seven different background processes (or cron jobs, which are scheduled tasks that run automatically) and workers that had been running silently for up to five years. All of them doing absolutely nothing useful. Here's why that matters, how we found them, and what we did about it.

Why this matters more than you'd think

Yes, running unnecessary infrastructure costs money. I did a quick calculation, and for one of those workers we would have paid ~$360-600 over 5 years. This is a modest amount in the grand scheme of our finances, but definitely pure waste for a process that does nothing. However, after going through this cleanup, I'd argue the financial cost is actually the smallest part of the problem. Every time a new engineer joins the team and explores our systems, they encounter these mysterious processes. "What does this worker do?" becomes a question that eats up onboarding time and creates uncertainty. We've all been there — staring at a piece of code, afraid to touch it because maybe it's doing something important. Even "forgotten" infrastructure occasionally needs attention. Security updates, dependency bumps, compatibility fixes when something else changes. This led to our team spending maintenance cycles on code paths that served no purpose. 
And over time, the institutional knowledge fades. Was this critical? Was it a temporary fix that became permanent? The person who created it left the company years ago, and the context left with them.

How does this even happen?

It's easy to point fingers, but the truth is this happens naturally in any long-lived system. A feature gets deprecated, but the background job that supported it keeps running. Someone spins up a worker "temporarily" to handle a migration, and it never gets torn down. A scheduled task becomes redundant after an architectural change, but nobody thinks to check. We used to send birthday celebration emails at Buffer. To do this, we ran a scheduled task that checked the entire database for birthdays matching the current date and sent customers a personalized email. During a refactor in 2020, we switched our transactional email tool but forgot to remove this worker—it kept running for five more years. None of these are failures of individuals — they're failures of process. Without intentional cleanup built into how we work, entropy wins.

How our architecture helped us find it

Like many companies, Buffer embraced the microservices movement (a popular approach where companies split their code into many small, independent services) years ago. We split our monolith into separate services, each with its own repository, deployment pipeline, and infrastructure. At the time, it made sense: each service could be deployed on its own, with clear boundaries between teams. But over the years, we found the overhead of managing dozens of repositories outweighed the benefits for a team our size. So we consolidated into a multi-service single repository. The services still exist as logical boundaries, but they live together in one place. This turned out to be what made discovery possible. In the microservices world, each repository is its own island. A forgotten worker in one repo might never be noticed by engineers working in another. 
There's no single place to search for queue names, no unified view of what's running where. With everything in one repository, we could finally see the full picture. We could trace every queue to its consumers and producers. We could spot queues with producers but no consumers. We could find workers referencing queues that no longer existed. The consolidation wasn't designed to help us find zombie infrastructure — but it made that discovery almost inevitable.

What we actually did

Once we identified the orphaned processes, we had to decide what to do with them. Here's how we approached it. First, we traced each one to its origin. We dug through git history and old documentation to understand why each worker was created in the first place. In most cases, the original purpose was clear: a one-time data migration, a feature that got sunset, a temporary workaround that outlived its usefulness. Then we confirmed they were truly unused. Before removing anything, we added logging to verify these processes weren't quietly doing something important we'd missed. We monitored for a few days to make sure they were not called at all, and we removed them incrementally. We didn't delete everything at once. We removed processes one by one, watching for any unexpected side effects. (Luckily, there weren't any.) Finally, we documented what we learned. We added notes to our internal docs about what each process had originally done and why it was removed, so future engineers wouldn't wonder if something important went missing.

What changed after cleanup

We're still early in measuring the full impact, but here's what we've seen so far. Our infrastructure inventory is now accurate. When someone asks, "What workers do we run?" we can actually answer that question with confidence. Onboarding conversations have gotten simpler, too. New engineers aren't stumbling across mysterious processes and wondering if they're missing context.
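The "producers but no consumers" cross-reference that the single repository made possible can be sketched in a few lines (a hedged toy, not Buffer's actual tooling: the file names, queue names, and send_message/poll_queue call patterns are invented, and a real pass would walk every file in the repo):

```python
import re

# Toy cross-reference over a consolidated repository. The file names,
# queue names, and call patterns are invented for illustration.
sources = {
    "billing/api.py": 'send_message("invoice-events", data)',
    "billing/worker.py": 'poll_queue("invoice-events")',
    "legacy/migrator.py": 'send_message("user-migration-2020", row)',
}

producers, consumers = set(), set()
for code in sources.values():
    producers |= set(re.findall(r'send_message\("([\w-]+)"', code))
    consumers |= set(re.findall(r'poll_queue\("([\w-]+)"', code))

# Queues that are written to but never read from: removal candidates.
orphaned = sorted(producers - consumers)
print(orphaned)  # ['user-migration-2020']
```

The same scan run the other way (consumers minus producers) flags workers polling queues nothing writes to anymore, the other kind of zombie the article describes.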
The codebase reflects what we actually do, not what we did five years ago.

Treat refactors as archaeology and prevention

My biggest takeaway from this project: every significant refactor is an opportunity for archaeology. When you're deep in a system, really understanding how the pieces connect, you're in the perfect position to question what's still needed. That queue from some old project? The worker someone created for a one-time data migration? The scheduled task that references a feature you've never heard of? They might still be running. Here's what we're building into our process going forward:

- During any refactor, ask: what else touches this system that we haven't looked at in a while?
- When deprecating a feature, trace it all the way to its background processes, not just the user-facing code.
- When someone leaves the team, document what they were in charge of, especially the stuff that runs in the background.

We still have older parts of our codebase that haven't been migrated to the single repository yet. As we continue consolidating, we're confident we'll find more of these hidden relics. But now we're set up to catch them and prevent new ones from forming. When all your code lives in one place, orphaned infrastructure has nowhere to hide. View the full article
  11. Zyxel Networks’ new FWA7 solution is packed with features and performance, and it could be just what ambitious WISPs are looking for. The post Zyxel Networks targets US & UK WISP/MSP markets with world’s first Wi-Fi 7 standard power 6 GHz dual-band PtMP FWA solution appeared first on Wi-Fi NOW Global. View the full article
  12. 7SIGNAL says the solution to AIOps involves making network-wide Wi-Fi (and other) data accessible to AI platforms via MCP. The post New paper from 7SIGNAL: Maximise enterprise networking operational benefits with MCP-based AI integration appeared first on Wi-Fi NOW Global. View the full article
  13. During an end-of-the-fiscal-year spending spree last year, the Department of Defense (DoD) dropped some dough on new Herman Miller furniture. The DoD spent $60,719 for chairs from the Michigan furniture manufacturer last September, according to the report from the watchdog group Open The Books, including at least one $1,844 Aeron Chair, the brand’s popular, ergonomic, fabric-meshed office chair. The Herman Miller purchases were just a small fraction of the record $93 billion detailed in the report, more than the DoD has spent in any single month since the group’s data began in 2007. For Herman Miller, its share was peanuts, considering the company is the longest holder of a federal government contract for office furniture, at more than 40 years. (Herman Miller did not respond to a request for comment by publication.) The DoD goes on an annual spend-it-or-lose-it buying spree every fall no matter the president or party, Open The Books found over a decade of tracking it. The group called on Defense Secretary Pete Hegseth to rein in the use-it-or-lose-it approach the agency takes to its budget. Instead, 2025’s spending was a record. While some line items highlighted in the report seem like clear attempts to run up expense reports before the time runs out, like $98,000 on a Steinway & Sons grand piano and $2 million on Alaskan king crab, office furniture purchases at least make practical sense. With nearly 3 million military and civilian employees, the DoD is one of the largest employers in the U.S. That’s a lot of butts in seats, which means a big budget for chairs and other office furniture. Open The Books found furniture purchases spike 564% every September over the monthly average across the other 11 months of the year. Last year, the DoD spent $225.6 million on furniture in total. Herman Miller’s parent company MillerKnoll had obligations of more than $15 million in the last fiscal year, and the DoD makes up 80% of its awarding agencies. 
In the past, the Defense Advanced Research Projects Agency (DARPA) spent nearly $250,000 on Herman Miller furniture for a conference room “refresh,” according to Open The Books, and the Federal Emergency Management Agency (FEMA) spent $284,000 on Herman Miller furniture for its conference center. For defense officials looking to set up an office, Herman Miller offers DoD-approved options for everything from desks, carts, and lockers to nurses’ stations, pharmacies, and labs. This isn’t the kind of workplace interior design work that Ikea was built to handle. For Herman Miller, though, its volume of government sales isn’t what it used to be. Federal spending records since 2008 show MillerKnoll’s transactions peaked during former President Barack Obama’s administration, with obligations totaling more than $174 million in 2010, a figure that dropped to a low of just over $12 million in 2023. While the DoD might not be as loyal a customer as it once was, Herman Miller has found other government work elsewhere. The company says it’s one of the largest furniture suppliers to state and local government agencies. View the full article
  14. Here is a number worth sitting with: 295%. That’s how much U.S. app uninstalls of ChatGPT surged in a single day last month, after OpenAI struck a deal with the Department of Defense that its rival Anthropic had publicly refused to sign. In the same 24-hour window, Claude’s downloads jumped 51%. By that evening, Anthropic’s app had climbed to No. 1 on the U.S. App Store, leapfrogging 20 apps in under a week. One values-driven decision. One weekend. A measurable transfer of market share. Most of the coverage framed this as a political story. It isn’t. Or at least, not only. It’s also a brand loyalty story. And it tells us something important about the category war that’s actually being fought in AI, one that has very little to do with compute power.

The Switching Cost Nobody Is Naming

Brand strategists understand switching costs intuitively. In banking, insurance, enterprise software—anywhere the friction is high—emotional and values-based factors end up doing as much heavy lifting as product performance. The category with the highest rational switching cost often becomes the category where trust matters most. AI is moving toward that same dynamic, faster than most people are ready for. An AI platform doesn’t just perform tasks. It accumulates context. It gets to know us—how we think, our shorthand, our working rhythms. For enterprise users in particular, this depth compounds quickly. The longer a business embeds an AI platform into its workflows, the higher the exit cost becomes, not just technically, but cognitively, culturally, and even emotionally. There’s a name for this: the relational cost. It’s the switching cost nobody in the AI conversation is actually naming. And in any high-switching-cost category, the ‘brand’ question—what does this company stand for, and do I trust it—eventually becomes the definitive one.

Operationalizing Values Is Not the Same as Talking About Them

The consumer response to the DoD news didn’t come out of nowhere. 
It was the visible payoff of a positioning strategy years in the making. Anthropic has been making a consistent, operationalized argument about what kind of company it is—and backing it with choices that have visible cost. The Claude Constitution is a publicly available, inspectable training framework. Not a mission statement—a framework. Anthropic’s Economic Index analyzes AI adoption across sectors and positions the company as a participant in the difficult societal conversation about AI’s impact on employment, not just a product vendor. These are category-shaping moves, not PR. The market had been registering these signals quietly, long before last month. Independent analyses suggest Claude holds 32% of enterprise AI usage, significantly disproportionate to its 3.5% consumer footprint. Enterprises—more deliberate, more risk-averse, more consequentially exposed to AI failure—have already been choosing Claude at scale. That gap between enterprise and consumer adoption isn’t a coincidence. It’s a trust premium.

The Cost of Caring

It’s easy to have values when they cost you nothing. For Anthropic, these came with a $200 million price tag. That’s the suggested value of this contentious Pentagon contract. Furthermore, the supply-chain risk designation—a label the Trump administration has now formally applied, and which Anthropic is challenging in court—threatens hundreds of millions more across broader government contracts. This damaging designation, historically reserved for foreign adversaries like Huawei, has never before been applied to an American company. That is a real commercial cost, not a hypothetical one. But what looks like a ceiling from one angle looks like a moat from another. In the weeks since the dispute went public, Anthropic’s revenue run rate has nearly doubled—from $9 billion at the end of 2025 to almost $20 billion today, according to Bloomberg. The government closed a door. The market opened several more. That is not a coincidence. 
That is what trust, operationalized and defended under pressure, looks like as a growth strategy.

So What Does This Mean for Your Business?

The question that should be on the table in every leadership meeting right now: which AI platforms are you building on, and have you thought seriously about what that association means for your brand? AI platforms are no longer neutral infrastructure. They carry values, make visible choices, take public positions. The AI your business relies on is becoming part of your brand. When a platform’s ethics come into question—as they periodically and inevitably will—that exposure travels upstream to every company in its orbit. This creates both a risk conversation and a strategic opportunity. Evaluating AI partners on trust and values criteria, not just capability benchmarks, is the kind of decision that looks obvious in hindsight and prescient in the moment.

The Brand Codes Are Being Written Now

Early positioning in emerging categories hardens fast. The companies that define what a space stands for, not just what it does, shape expectations for years. We saw it with social media, with streaming, with fintech. In each case, the brands that defined the category’s values, not just its features, built loyalty advantages that capability alone couldn’t disrupt. AI is at that moment. The conversation about what kind of category this is going to be is happening now, in public, in real time. Stop asking which AI is most capable. Start asking which AI your business can afford to be associated with. Because our whirlwind romance with AI is fast turning into something more serious: committed, often exclusive, long-term relationships where platform loyalties get more embedded and more entrenched by the day. Choose carefully. Credibility compounds faster than compute. The data is already proving it. View the full article
  15. At a time when mainstream brands live in fear of getting dragged into a contentious political landscape, there’s something curiously benign, almost feel-good, about “Florsheimgate.” If you’ve somehow missed it, this particular instance of an involuntary pop-culture brand cameo came about following press reports this week that President Donald Trump has become an enthusiast—and de facto brand ambassador—for Florsheim dress shoes, gifting pairs to cabinet members and media allies. The upshot is that less-than-$150 Florsheims have become “the hottest and most exclusive MAGA status symbol,” according to The Wall Street Journal. But more to the point, administration insiders who don’t find the brand “hot” in the slightest, and would likely prefer more luxurious footwear, are sticking with the shoes Trump gives them—even, weirdly, if they don’t fit. This naturally caught the attention of MAGA critics, who promptly lit up social media with mockery of the 79-year-old president’s taste and allegedly Stalinesque bullying of his compliant minions. And this included some collateral damage for the venerable, and some might say dowdy, Florsheim. But really, even the inevitable dunking (what a dated mall brand!) seemed good-humored. “Florsheim,” one Bluesky user wrote. “When a Gift From Wicks n’ Sticks Just Isn’t Enough.” Others added comments like “florsheim didn’t go out of business in like 1978?” and “Florsheim shoes? Man, that guy’s brain really is stuck in the 80’s” and “Ok I give. What’s Florsheim.” And of course plenty of memes. I get the feeling we’ll be discussing Florsheim shoes today. — 𝕊𝕦𝕟𝕕𝕒𝕖 𝔾𝕦𝕣𝕝 (@sundaedivine.lol) 2026-03-11T10:18:31.168Z Funny, but well short of a dangerous brand backlash. Nobody’s demonizing Florsheim-wearers in general, putting out videos of shooting up loafers, or organizing a grassroots brand-oppo campaign on behalf of Vuitton loafers. 
To the contrary, it seems, at worst, to be a short-term, almost charming free publicity reminder to those who don’t know that the brand is still around—and, apparently, thriving. Turns out, Florsheim enjoyed “record” wholesale sales of $92 million in 2025, according to parent Weyco Group’s most recent earnings release and call earlier this month, “demonstrating resilience in a declining market for non-athletic brown shoes.” The Florsheim brand has a choppy history dating all the way back to 1892. Worn by everyone from Harry Truman to Michael Jackson, it’s a brand deeply embedded in American consumer culture, a staple brand of the suburban shopping mall’s heyday. But it also endured a bankruptcy filing in 2002. It’s now part of the Weyco Group, whose CEO is Thomas Florsheim Jr., a fifth-generation Florsheim. (Sales of other Weyco brands Nunn Bush, Stacy Adams, and Bogs were down last year, dragging down revenue and earnings for the company overall.) Weyco did not respond to an inquiry from Fast Company, but CEO Florsheim told The Journal he was not aware of Trump’s orders (and declined further comment). In the conference call (which predated this week’s Trump fandom news), the CEO was upbeat, calling Florsheim “one of the few men’s [shoe] brands outside of the athletic category to sustain this level of post-pandemic growth. While the non-athletic brown shoe category has been in secular decline, Florsheim has bucked the trend and gained market share.” Whether that’s true or not, the association with Trump seems more like a passing entertainment than a brand controversy. At a moment of profound tension brought on by war and the threat of a new global oil crisis, Florsheimgate didn’t land like a point of contention; it was more like comic relief. In an interesting footnote, Weyco noted in its earnings call that tariff impacts—which CEO Florsheim has groused about in the past—“significantly affected gross margins” in 2025. 
Those tariffs have since been judged illegal by the Supreme Court, and the company “is optimistic about retrieving $16 million from tariff refunds.” Maybe Trump’s Cabinet members should keep a spare pair of another brand’s loafers at the office, just in case Florsheim goes out of fashion at the White House. View the full article
  16. The latest accusations suggest a manager instructed a loan officer to photograph confidential data and process it in ChatGPT to avoid detection. View the full article
  17. For the first time that I can remember, this year I was completely enthralled by the Winter Olympics. In fact, I don’t think I’d ever watched the Winter Games before, but it really caught my attention this go-round. One event that really stood out for me was the skeleton. For the uninitiated (like I was just a month ago), the skeleton is a sliding sport where athletes lie face down, headfirst, on a small sled going 80 mph down an icy, descending track. On the surface, it doesn’t look like it requires much from the athlete but to lie down and hang on for dear life until crossing the finish line. But upon further inspection, the sport is far more intricate, requiring the athlete to make subtle adjustments with their shoulders, knees, and even their toes to control and steer the sled. The slightest weight shifts can make the difference between first place and last. As if the Olympics weren’t competitive enough, the margin of error in this event is minuscule. I was fascinated, particularly about the idea of finding balance. There’s so much talk about work-life balance, work-self balance, and just about any other “something-something” balance where the two somethings seem to be at odds with each other. To find balance, we make subtle adjustments throughout our days and weeks—blocking off time, making time, taking time—in hopes of steering our lives and maintaining control of ourselves. However, according to Misan Harriman, balance is less of an “act” and more of a series of choices that informs action; it’s not what we decide to do but who we choose to be.

Raw and honest moments of humanity

Harriman is a photographer, activist, and Oscar-nominated filmmaker whose work has been prominently featured in publications like Vogue, celebrated on awards stages, and widely shared throughout the zeitgeist. His work captures the raw and honest moments of humanity—in resistance, grief, joy, and all the many manifestations of our true existence. 
Our conversation with Harriman on the From the Culture podcast explored the balancing act of profitability and principle, where he argues that “profit at all costs” carries a heavy price tag that can cost us our authenticity. We make decisions at work that call into question the integrity of who we perceive ourselves to be outside of the office. Tech CEOs sell products to schools that they hardly ever let their own children use. Managers treat their subordinates in ways that would anger them if it were something their spouse had to endure. Whether it’s the way we communicate with peers or manage our presentation of self at work, far too often there is an imbalance between ourselves—who we say we are and how we are. Our inconsistent performances of self not only cause harm in our work but can also cause a crisis of authenticity. Fittingly, sociologist Erving Goffman likens the theatrical stage to the dynamics of social living, borrowing from William Shakespeare’s comedy As You Like It, where he writes, “All the world’s a stage, and all the men and women merely players.” Our presentation of self, as Goffman posits, is a choice we make. We decide which character we choose to play in social life. This choice subsequently demands a series of decisions that coincides with said character. The costumery. The script. The mannerisms. The exits and entrances. They are all by-products of the character we choose to play. That is to say, who we choose to be informs how we choose to be.

A choice of character

Through this lens, the balancing act of work-life or work-self is a choice of character and commitment to it. And although we attempt to balance the existence of two characters with adjustments here and there, like the athletes in the skeleton event, these seemingly subtle shifts of self can have tremendous impact. The idea then is to remain true to self, one character that is consistent despite the context. This is, after all, the definition of authenticity. 
As Goffman warns, we should pay mind to the mask we choose to wear because if we aren’t careful, our mask could soon become our face. This means we have agency in the matter. We can decide who we want to be and, therefore, how we’re going to behave. We have a choice; but when we don’t choose, the context will certainly choose for us. Check out our full conversation with Misan Harriman on the latest episode of From the Culture here on Spotify or wherever you get your podcasts. View the full article
  18. Google's Gary Illyes offered a candid overview of Googlebot, explaining there are hundreds of crawlers that are not publicly documented. The post Google Says They Deploy Hundreds Of Undocumented Crawlers appeared first on Search Engine Journal. View the full article
  19. The modern workplace is designed for early risers. But only about 30% of people are true morning types. The rest fall somewhere in between—or toward the later end of the spectrum (those who think, create, and perform best later in the day). Through my work implementing circadian health and performance in organizations in 17 countries, I’ve discovered three strategies to help night owls create workdays that protect their energy, creativity, and well-being so they can perform better and share their true talents.

1. Give yourself a slow start

As a night owl, your day simply starts later—and that’s by design. Give your body time to wake naturally and ease into the day without rushing. Morning daylight (outside) can help, as it’s your internal clock’s strongest synchronization signal. Get at least 20 minutes of daylight before noon. This exposure won’t turn you into a morning person, but it helps stabilize your rhythm, reduce social jet lag, and boost alertness when your day begins. Magne, a late chronotype I work with, thrives when he can start his day quietly and let his energy build through the morning. When he aligns his schedule with his rhythm—working deeply in the afternoon and protecting calm mornings—his focus and creativity soar. If your organization’s rhythm starts earlier than yours, make micro-adjustments: Move demanding work to the afternoon, take short daylight breaks, or negotiate one or two later start times per week. Even small shifts can make a measurable difference to your sleep quality and mood, because they help protect the REM sleep that fuels creativity and emotional balance. Most of your REM sleep happens in the final hours of the night—so when an alarm cuts off those last one to two hours, you can lose up to half of your REM. Small changes like these help you reclaim that vital recovery time and bring your body back in sync.

2. Do your hardest work later

Your performance peaks in the afternoon or evening. 
Use those hours intentionally for strategy, problem-solving, and creative work. If you have some flexibility to set your work schedule, protect late-day focus blocks where you can work without interruption. And always set a clear end time so that your late energy doesn’t steal the sleep that refuels it. You thrive when working in the evenings, but turn off your computer at least one hour before you go to bed. The light from screens delays melatonin and can push your sleep window even later.

3. Schedule afternoon exercise

Your body is at its physical best later in the day. Research shows that late chronotypes perform up to 26% better in the afternoon and evening compared to the morning. Strength, flexibility, and coordination all peak as your temperature and alertness rise. That’s why it’s important to schedule exercise in the afternoon or early evening, when your body is naturally primed. It’s not just better for performance—it also supports sleep quality by helping you wind down gradually. Evenings are also when your social energy is highest. Many cultural and social activities—concerts, theater, dinners, and gatherings—are already designed for night owls. When you align your day with your biology, you protect your energy and unlock your full potential. And when leaders replace moral judgment with biological understanding, they unlock trust, creativity, and genuine performance. As jazz legend Miles Davis put it: “Sometimes it takes a long time to sound like yourself.” Designing your workday around your chronotype is one of the fastest ways to sound—and work—like yourself. View the full article
  20. The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela. While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today. In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and reality in military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but the people who use them. In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of considerable debate. Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. 
Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration, and cybersecurity. Claude is an example of a decision support system, not a weapon. Claude is embedded in the Maven Smart System, used widely by military, intelligence, and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities. The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attack on Iran, and some of the issues that arise from its use.

The long history of military AI

Weapons with some degree of autonomy have been used in war for well over a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the U.S. Patriot system, have long offered fully automatic modes. Robotic drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of “dull, dirty, and dangerous” tasks on land, at sea, in the air and in orbit. Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes. 
Combatants in the Russia-Ukraine war have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators. But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain. Cold War-era command-and-control systems anticipated modern decision support systems such as Israel’s AI-enabled Tzayad for battle management. Automation research projects like the U.S.’s Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency’s strategic computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the Electronic Battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has developed new ideas and organizational concepts. Particularly noteworthy is the emergence of a new style of special operations during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process. 
Systems like Maven became essential for this style of counterterrorism. The impressive American way of war on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations, and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running. AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was often misled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser accidentally shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999. Many Iraqi and Afghan civilians died due to analytical mistakes and cultural biases within the U.S. military. Most recently, evidence suggests that a Tomahawk cruise missile struck a girls’ school adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting could have resulted from a U.S. intelligence failure.

Automated prediction needs human judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, lax Israeli rules of engagement likely matter more than automation bias. As the name implies, decision support systems support human decision-making; AI does not replace people. 
Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing, and protecting their systems and data flows. Commanders still command. In economic terms, AI improves prediction, which means generating new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values, and commitments regarding real-world outcomes, but AI systems intrinsically do not. In my view, this means that increasing military use of AI is actually making humans more important in war, not less.

Jon R. Lindsay is an associate professor of cybersecurity and privacy and of international affairs at the Georgia Institute of Technology.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View the full article
  21. Oil is a global market, so when prices rise in one place, they rise everywhere. The current war against Iran has already raised oil prices significantly. Mideast oil production has been slowed by efforts to close the Strait of Hormuz, a key route for oil tankers from the Middle East to the rest of the world, as well as by attacks—and fears of attacks—on oil production, storage, and shipment installations. This war has also disrupted the flow of liquefied natural gas from Qatar, which controls almost 20% of the global market. That also affects the world economy and supply chains: shortages of natural gas affect production of fertilizer and aluminum, as well as other key materials.

As a professor who has been studying oil price shocks for two decades, I’m often asked about the effects of rising oil prices on the U.S. economy. The answer to that question has changed over the past two decades.

The global economic picture

Countries that import much of their oil have to pay other countries for that imported oil. That was a problem for the U.S. from the 1970s through the early 2000s. The U.S. sent billions of dollars a year abroad to oil-producing countries in the Middle East, Africa, and Latin America. That money built up other countries’ economies or sloshed around as financial surpluses that fueled financial market exuberance and asset bubbles that could suddenly pop.

Oil imports increased the U.S. trade deficit in the 1970s and beyond. As a result, U.S. industries suffered from high energy costs, which forced closures of major U.S. steel plants and iron and copper mines. Falling purchases of cars and other durable goods also led to worker layoffs.

A shift in U.S. production

Now, however, the United States is a major producer and exporter of oil and refined petroleum products. Every day, on average, the U.S. exports more than 6 million barrels of refined products and more than 4 million barrels of crude oil. The U.S.
does still import some crude oil, most of which is heavy oil from Canada handled at certain American refineries on the U.S. Gulf Coast. Factoring in those imports, the net U.S. oil trade balance is a positive 2.8 million barrels per day, in contrast with the mid-2000s, when the balance was a deficit of 12 million barrels per day. U.S. production comes from 32 states—though mainly from the biggest producers: Texas, New Mexico, North Dakota, Alaska, Oklahoma, and Colorado. Because that revenue comes to companies in the U.S., the nation’s gross domestic product is less vulnerable to oil price increases than in the past, when high prices meant more U.S. dollars flowing overseas.

A changed economy

In addition to being less dependent on imports, the U.S. economy is much less oil-intensive than it used to be, producing more economic value with far less oil than in the past. And researchers at the U.S. Federal Reserve report that gasoline prices haven’t been a major contributor to U.S. inflation in recent years. That’s because there are many ways Americans now use less gasoline, including telecommuting and remote work, online shopping, and using electric vehicles and delivery trucks that run on batteries or other fuels. Still, other economists disagree and say current oil prices, which are above $100 a barrel, could increase current U.S. inflation rates by as much as 1 percentage point.

The mental toll

Though the U.S. is economically less vulnerable to oil-price shocks, there is also a psychological factor. It’s hard not to feel pessimistic when gasoline prices at the local pump are rising: bulk market prices are already soaring amid hedging trades and speculative fervor among traders and wholesalers on U.S. commodity futures markets. Americans feel pessimistic about consumer spending when gasoline prices are rising. And a study found that high gas prices even make people feel unhappy.
Research also shows that people tend to put off major durable goods purchases, such as automobiles, when oil prices rise sharply. That could mean bad news for the U.S. auto industry. But it is also possible that high gasoline prices might encourage more Americans to consider buying electric cars. That could help the car companies that were having difficulty moving their electric-vehicle inventories. And for people who own electric vehicles, the war and its resulting price increases can be a reminder of the benefits of living gasoline-free.

More broadly, the war might be yet another reminder of the benefits of diversifying energy sources away from fossil fuels. As my research shows, oil price shocks generally lead to greater investment in clean technologies.

Amy Myers Jaffe is a director at the Energy, Climate Justice, and Sustainability Lab and a research professor at New York University. She is also a faculty affiliate of the Climate Policy Lab at Tufts University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View the full article
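The trade figures in the oil article above imply an import number that the piece never states. A back-of-the-envelope sketch using the article’s rounded values (the implied-imports figure is derived here, not reported, and since the export numbers are “more than” values it is only indicative):

```python
# Back-of-the-envelope check of the cited U.S. petroleum trade figures,
# in million barrels per day. All inputs are the article's rounded values.
exports_refined = 6.0  # refined products exported ("more than 6 million barrels")
exports_crude = 4.0    # crude oil exported ("more than 4 million barrels")
net_balance = 2.8      # reported net oil trade balance (positive = net exporter)

total_exports = exports_refined + exports_crude
# Gross imports are not stated in the article; they follow from the identity
# net balance = exports - imports, so this value is derived, not reported.
implied_imports = total_exports - net_balance

print(f"total exports:   {total_exports:.1f} mb/d")    # 10.0 mb/d
print(f"implied imports: {implied_imports:.1f} mb/d")  # 7.2 mb/d
```

The roughly 7.2 million barrels per day of implied gross imports is consistent with the article’s note that the U.S. still imports substantial heavy crude from Canada even while being a net exporter.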
  22. My friend Jessica Kriegel often warns her clients about the action trap: the urge to do something—anything—when things aren’t going well. Yet while taking action might make us feel better, it’s no guarantee we’ll get results. Many leaders fall into this trap, confusing taking action with making an impact, which can blind us to the underlying problem.

The truth is that you can’t change fundamental behaviors without changing fundamental beliefs. It is, after all, beliefs, in the form of norms, that get encoded into a culture through rituals that drive behaviors. So unless you make a serious effort to understand the underlying problem you’re trying to solve, any action you take is unlikely to be effective.

That’s why you need to start by asking good questions. While coming up with answers makes us feel decisive, those answers often close doors that should be left open and explored. Good questions, on the other hand, can lead to genuine breakthroughs. With that in mind, here are three essential questions you need to ask before embarking on a transformational initiative.

1. Is this a Strategic Change or a Behavioral Change?

Every change effort represents a problem, or set of problems, to be solved. A strategic change starts at the top and needs effective communication and coordination for everybody to play their role, like the famous case at Intel, when Gordon Moore and Andy Grove made the fateful decision to move out of memory chips and bet the company on microprocessors.

In a strategic shift, resistance is not particularly relevant. That doesn’t mean it doesn’t exist. As Grove recounted in his memoir, Only the Paranoid Survive, there were plenty at Intel who questioned the decision. But as chairman and CEO, Moore and Grove had full authority to allocate budgets and convert factories, and the change was going to happen whether people liked it or not.
That’s why traditional change management methodologies, like Kotter’s 8 Steps or Prosci’s ADKAR (awareness, desire, knowledge, ability, and reinforcement), tend to be effective for strategic changes.

Yet research shows that change itself has changed. In 1975, 83% of the average U.S. corporation’s assets were tangible assets, such as plants, machinery, and buildings; by 2015, 84% were intangible, such as licenses, patents, and research. That means the changes we grapple with today have less to do with strategic assets like factories and equipment and a lot more to do with the things people think and do every day.

Clearly, that changes how we need to approach transformation, because often the most important changes involve collective action, which can be maddeningly complex. People adopt things when they see others around them doing so. Success begets more success, just as failure begets more failure. Big communication campaigns can ignite early resistance and backfire, while isolated individual efforts rarely scale. For collective action problems, we need to focus on, as network science pioneer Duncan Watts put it to me, “easily influenced people influencing other easily influenced people.” You build momentum and reach critical mass not through persuasion but through connection—by empowering early adopters and helping them influence others.

2. What are the Shared Values?

Humans naturally form tribes. In a study of adults who were randomly assigned to “leopards” and “tigers,” fMRI scans revealed signs of hostility toward out-group members. Similar results were found in a study involving 5-year-old children and even in infants. Evolutionary psychologists attribute this tendency to kin selection, which explains how groups favor those who share their attributes in the hope that those attributes will be propagated.
Our ideas, beliefs, and values tend to reflect the tribes we belong to, and sharing our thoughts and feelings plays a key role in signaling our identity and belonging to these groups. For instance, expressing an expert opinion can demonstrate alignment with a professional community, while sharing a moral stance can signal inclusion in a particular cultural group.

Every organization has its own tribes, with their own values, customs, and lore. Divisions and functions develop their own norms, rituals, and behaviors, shaped by their institutional needs and priorities. As the workplace expert David Burkus told me, there isn’t really any such thing as an organizational culture, because each organization contains multitudes of cultures.

So before you start trying to evangelize a transformational initiative across those myriad cultures, with all of their internal biases and emotional trip wires, think about the values they share and build an inclusive vision. That may sound simple and straightforward, but it’s harder than it seems, which helps explain why so many transformational efforts fail.

The problem is that when we’re passionate about something, we want to focus on how it’s different, because that’s what makes us passionate in the first place. We want to talk about how innovative and disruptive it is. Yet while that may honor the idea itself, it doesn’t do much for the people we want to adopt it. If we want them to share our priorities and aspirations, they have to believe that they share our values.

3. What are the Sources of Power?

We like to think of transformation as a hero’s journey. There’s an alternative future state that we want to reach, and we’d like to think that if we’re good enough, do all the right things, and have a righteous cause, we’ll eventually get to that place. Yet the truth is that change is always a strategic conflict between that future state and the status quo, which always has sources of power keeping it in place.
These sources of power have an institutional basis and form pillars supporting the current state. It is only by influencing these pillars that we can bring about genuine change, because without institutional support, the status quo cannot be maintained.

That’s why, to build an effective transformation strategy, we need to identify the institutions that support the status quo, those that support the future state, and those that are still on the fence and as yet uncommitted. These institutions can be divisions or functions within an organization, customer groups, government agencies, regulators, unions, professional and industry associations, media, educational institutions—the possibilities are almost endless. What’s important is that they have power and/or resources that can either hold things up or move them forward. That’s what makes them viable targets for action.

If you can influence the sources of power upon which the status quo depends, genuine transformation becomes possible. But make no mistake: as long as the forces upholding the status quo stay in place, nothing will ever change.

The Power of a Question

All too often, transformational initiatives are presented as a fait accompli. A strategy is set, a plan is made, and everything is announced with a lot of hoopla at a big launch event. Questions are treated as a nuisance, something to be batted away rather than engaged with. Change leaders try to act as if they have all the answers, an effort that seldom succeeds.

Yet while answers tend to close a discussion, questions help us open new doors and lead to genuine insights. Asking “What kind of change is this?” is essential to building a strategy to overcome challenges. Investigating shared values is key to getting widespread buy-in. Analyzing sources of power is how you identify institutional targets for action. The truth is that every great breakthrough starts with a question.
As a child, Einstein asked, “What would it be like to ride on a beam of light?” which led to his theory of special relativity. He then asked a second question, “What would it be like to ride an elevator in space?” and that led to his theory of general relativity.

Change leaders often feel they need to have all the answers, but what they usually need is to ask more—and better—questions. That’s the essence of the changemaker mindset: it’s not about building consensus around a plan and executing it, but about building a coalition to explore possibilities that lead to a better future.

View the full article
  23. Figure comes as surging energy prices pose new threat. View the full article
  24. War in the Middle East has prevented tens of thousands of people in Asia from getting home. View the full article
  25. Evidence suggests that the US was most likely behind the attack that killed over 100 children. View the full article
  26. From Iran to private credit, things are unnerving but the wider financial system is better prepared. View the full article
  27. Olivier Janssens accused of ‘public bribes’ with offer to locals as his development awaits final government approval. View the full article



