



All Activity


  1. Past hour
  2. The standard agency reporting call is broken. Budgets are under extreme scrutiny, yet you still invest in vendors that celebrate arbitrary traffic gains while your sales pipeline stays flat. Optimizing for raw traffic volume is a legacy mindset that hides real commercial performance. The new mandate is to build an acquisition engine that influences buyers and protects your profit and loss (P&L) long before the transaction. To survive as a marketing leader today, you must ruthlessly challenge your internal teams and external agencies. Stop accepting reports on operational output and demand hard financial accountability: pipeline contribution, customer lifetime value (LTV) to customer acquisition cost (CAC) ratios, and reduced paid media dependency. The new path to purchase: Why traffic is bleeding your budget Chasing top-of-funnel informational traffic is a trap. If the users clicking your links aren’t actively buying, you’re paying for vanity metrics, not business outcomes. This happens because many buyers now use large language models (LLMs) to conduct deep research before they reach a search engine’s transactional layer. If you aren’t the cited authority during that AI-driven research phase, you’re invisible by the time buyers finalize their purchase decisions. The 7.48% reality: The power of the educated buyer The contrast in traffic quality is staggering when you look at the data. Across our enterprise client base, traditional organic search converts at 2.75%, while AI search converts at 7.48%. LLMs function as the ultimate trust proxy for today’s consumers. When tools like Gemini, ChatGPT, or Perplexity synthesize dozens of reviews, whitepapers, and Reddit threads to recommend your enterprise software, users trust the LLM’s consensus more than a branded blog post. AI engines arm consumers with comprehensive data, comparisons, and consensus. 
By the time a user clicks your AI citation, they’ve already made their decision based on your authority and are prepared to transact. From found to cited: Architecting the default recommendation Want to capture this 7.48% conversion rate? Your entire approach to digital asset creation must evolve. The strategy no longer centers on ranking among a list of links, but on being cited as the definitive option. To win the AI consensus, you must translate your marketing strategy into structured capital management. The old way: Publishing a 2,000-word blog post on top supply chain trends that generates 5,000 monthly visitors who bounce after reading and add zero value to your pipeline. The new way: Build a generative engine optimization (GEO) hub—a dedicated supply chain cost calculator page with proprietary data tables, expert author schema tagging your lead engineers, and strict answer-first formatting. LLMs require consensus and verifiable facts to generate confident answers. By structuring your digital assets with proprietary data and verifiable entities, you become the default recommendation. This approach may yield only 500 highly qualified visitors, but it gives LLMs what they need to cite you in vendor comparison prompts and captures buyers at the exact moment of commercial evaluation. Strategic ROI: Using citation authority to reduce ad spend It’s time to stop viewing SEO as a siloed traffic generator. You must treat organic citation authority as a strategic financial lever to reduce overall CAC. Align your organic assets with your highest-CAC paid campaigns. When organic search owns the AI Overview, your paid team can confidently pull back defensive ad spend. 
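The financial accountability the article demands (blended CAC and the LTV:CAC ratio) is simple arithmetic once the inputs exist. A minimal Python sketch; all figures are hypothetical illustrations, not data from the article:

```python
# Blended CAC and LTV:CAC from channel-level inputs.
# All figures below are hypothetical illustrations.

def blended_cac(paid_spend: float, organic_cost: float, new_customers: int) -> float:
    """Total acquisition cost across channels divided by new customers won."""
    return (paid_spend + organic_cost) / new_customers

def ltv_cac_ratio(avg_ltv: float, cac: float) -> float:
    """Lifetime value earned per dollar of acquisition cost."""
    return avg_ltv / cac

cac = blended_cac(paid_spend=90_000, organic_cost=30_000, new_customers=400)
print(f"Blended CAC: ${cac:,.0f}")                    # Blended CAC: $300
print(f"LTV:CAC: {ltv_cac_ratio(2_700, cac):.1f}:1")  # LTV:CAC: 9.0:1
```

The 3:1 LTV:CAC benchmark quoted later in the article would be the floor to hold vendors to under this framing.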
Here’s how to leverage paid and AI search: IF your brand becomes the default AI recommendation for a high-cost commercial category, THEN your paid team must aggressively reduce defensive brand bidding to slash overall cost per acquisition (CPA). IF paid search identifies a highly profitable long-tail query, THEN SEO must prioritize building a structured asset to organically capture that exact demand in the future. IF an LLM cites your competitor as the superior enterprise solution, THEN your paid team must immediately deploy targeted, bottom-of-funnel conquesting ads to intercept that user before the transaction, while the organic team rapidly engineers a proprietary data asset to win back the consensus. The monthly cannibalization review: Your immediate action item If your Head of Search and Head of Paid Media aren’t in the same room once a month mapping organic citations against paid brand bidding, you’re burning capital. Align your teams and channels. Routinely audit where you’re paying for clicks on terms where you already own the AI citation and the top organic spot. Treat this cannibalization review as a strict financial audit. Identify wasted defensive ad spend and immediately reallocate those dollars toward net-new market expansion. The enterprise scorecard: 3 questions to ask your agency tomorrow To regain control of your P&L, you must challenge your vendors to step up. Ask your agency these three questions tomorrow morning to see if they’re true business partners or order-takers. 1. What’s our citation share of voice for our highest-margin categories? Challenge your team to map their organic efforts directly to the AI research phase of your most profitable products. The answer you should hear: “We’ve mapped your 50 highest-margin queries. By securing the primary AI citation for these, we’ve generated $1.2 million in pipeline this quarter at a 3:1 LTV:CAC ratio.” 2. How is our citation strategy directly reducing our paid media CAC? 
Require teams to prove how their organic authority captures demand that would otherwise require paid ad spend. The answer you should hear: “By capturing the definitive AI citation for [category], we paused paid bidding on those terms. This reduced our blended CAC by 18% and saved $45,000 in defensive ad spend — which we’ve immediately reallocated to net-new market expansion.” 3. Are our digital assets structured for LLM extraction? Push your teams to explain their strategy for AI-driven search models. It’s no longer enough to publish standard web pages. The answer you should hear: “We’ve restructured your core commercial pages away from standard marketing copy, deploying ‘answer-first’ frameworks, proprietary data tables, and expert author entities to ensure LLMs confidently extract and recommend your brand. This structural shift has increased our inclusion in commercial AI Overviews by 40% this quarter, directly feeding our bottom-of-funnel pipeline.” Demand commercial outcomes, not operational output In a tough economy, SEO is a measurable business unit that must defend its budget with revenue data. Don’t accept operational output as proof of commercial success. Audit your reporting frameworks immediately. Stop accepting vanity metrics as evidence of success. Demand pipeline impact, LTV:CAC ratios, and a resilient acquisition engine. Any agency or internal team unwilling to tie its work directly to your P&L will become obsolete. Your job as an enterprise leader is to ensure your brand is cited as the authority long before the transaction begins. View the full article
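The monthly cannibalization review described above reduces to cross-referencing paid spend against terms where the brand already owns both the AI citation and the top organic spot. A minimal sketch; every term, field name, and figure here is hypothetical:

```python
# Flag paid spend on terms already owned organically and in AI citations.
# Every term and figure here is hypothetical.
terms = [
    {"term": "supply chain software",   "paid_spend": 12_000, "organic_top": True,  "ai_citation": True},
    {"term": "inventory planning tool", "paid_spend": 8_000,  "organic_top": False, "ai_citation": True},
    {"term": "demand forecasting",      "paid_spend": 5_000,  "organic_top": False, "ai_citation": False},
]

# Candidates for reallocation: paying for clicks you already own twice over.
flagged = [t for t in terms if t["organic_top"] and t["ai_citation"]]
reclaimable = sum(t["paid_spend"] for t in flagged)

for t in flagged:
    print(f"Review defensive spend on '{t['term']}': ${t['paid_spend']:,}/mo")
print(f"Candidate reallocation: ${reclaimable:,}/mo")
```

The flagged total is the wasted defensive spend the article says should be treated as a line item in a financial audit and moved to net-new expansion.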
  3. In SEO Pulse: AI Mode keeps more links inside Google, Maps adds conversational discovery, and Search Console rolls out automated brand segmentation. The post AI Mode Data, Ask Maps & Branded Queries Go Live – SEO Pulse appeared first on Search Engine Journal. View the full article
Europeans look for ways to restart energy shipments as Iran’s new supreme leader vows to keep strait shut. View the full article
Carmaker’s technology means EVs can be ready almost as quickly as filling a fuel tank. View the full article
  6. For years, SEO followed a fairly predictable playbook: create valuable content, optimize it for search engines, and compete for rankings on Google. But the way people discover information online is changing quickly. Tools like ChatGPT, Perplexity, and Gemini are introducing a new layer between users and search engines, where answers are generated and synthesized rather than simply retrieved. In a recent episode of the Get Discovered podcast, Joe Walsh, CEO of Prerender.io, sat down with Yoast’s Principal Architect Alain Schlesser to discuss what this shift means for SEO and online discoverability. Their conversation explores how AI answer engines are reshaping the search landscape and why many traditional SEO assumptions no longer fully apply. Alain shares insights on: How AI systems retrieve and surface information Why brands must rethink their online positioning, and What businesses should start preparing for as AI-driven discovery evolves over the next 12–18 months? Watch the full conversation between Joe Walsh and Yoast’s Principal Architect, Alain Schlesser, in the Get Discovered podcast below. Table of contents The new discovery layer: AI is becoming the gatekeeper Search is fragmenting beyond Google The “top results or nothing” reality Why Yoast launched AI visibility tracking The next evolution: AI agents making decisions SEO matters more than ever The new discovery layer: AI is becoming the gatekeeper “There’s now a layer in front of search that acts as a gatekeeper before you even hit those search engines.” That’s how Alain describes one of the biggest structural shifts happening in online discovery today. For years, the flow of search was straightforward: a user typed a search term into a search engine, the engine returned a list of results, and the user decided which link to click. But AI-powered systems have added a new layer to that process. 
From search queries to conversational discovery Today, many users begin their search journey by asking questions in tools like ChatGPT, Perplexity, or Gemini instead of typing traditional keyword queries. The AI system then determines whether it needs external information and may generate multiple search queries behind the scenes to retrieve relevant sources. The discovery flow now looks something like this: Previously: User → Search engine → Website Now: User → AI model → Search engine → Website → AI synthesis → User Instead of presenting a list of links, the AI model interprets and combines information before generating an answer. Alain explains this process in more detail in the podcast, highlighting how AI systems now act as a filtering layer between users and the web. Search is fragmenting beyond Google “We were in a rather comfortable position where we were only dealing with a monopoly search.” For much of the past two decades, SEO largely meant optimizing for one ecosystem: Google. Even though other search engines existed, Google dominated how people discovered information online. But that environment is changing. As Alain explains, AI systems are introducing a new layer of fragmentation in discovery. Different AI platforms rely on different combinations of search engines, indexes, and training data, which means results can vary widely between them. In practice, that means a brand might appear prominently in one AI system while barely showing up in another. For SEO teams, this marks a shift toward thinking about visibility across multiple AI-driven environments rather than just one search engine. Do check out: Why does having insights across multiple LLMs matter for brand visibility? What hasn’t changed: The fundamentals of SEO Despite technological changes, Alain emphasizes that the core principles of good SEO remain intact. “You shouldn’t try to game the search engine. 
You need to create valuable content that humans actually want to read, and structure it so search engines can understand it.” At its core, search still aims to deliver the best possible answers to users. Whether the request comes from a person typing a query or an AI model generating one behind the scenes, the goal remains the same: surface useful, reliable information. That means SEO teams should continue focusing on fundamentals such as: high-quality content clear structure indexable and accessible pages content that satisfies the user’s search intent AI systems may change how information is surfaced, but they still rely on the same underlying signals of quality and relevance. The “top results or nothing” reality As the discovery landscape evolves, another important shift emerges in how AI systems interact with search results. “They don’t see the full search result page. What the LLM typically sees is just the five topmost elements per search query.” Unlike human users, AI systems typically work with a very small set of retrieved sources before generating an answer. That means if your content doesn’t appear among those top results, it may never reach the AI system at all. In a world where AI answers rely on the summarization of modern content, only the sources that make it into that small retrieval window influence the final response. This makes strong search visibility more important than ever. Ranking well isn’t just about earning clicks anymore. It determines whether your content is even considered when AI systems construct an answer. Why “safe” content strategies are no longer enough Even if your content reaches those top results, there’s another layer of filtering happening inside the AI model itself. Large language models compress enormous amounts of information during training. As Alain explains: What the model keeps are the dominant signal and the outliers. Everything in between is often compressed away as statistical noise. 
In the podcast, Alain uses this idea to explain why brands that try to be broadly acceptable or “safe” may struggle to stand out in AI-driven discovery. The takeaway is clear: in a world where AI systems summarize and compress information, having a clear and distinctive perspective becomes increasingly important. Why Yoast launched AI visibility tracking As AI systems reshape how information is discovered and summarized, a new challenge emerges for businesses: understanding how their brand appears in AI-generated answers. That’s the problem Yoast set out to address with Yoast SEO AI +, a feature designed to help businesses monitor how their brand shows up across major AI platforms. Earlier in this article, we explored how AI systems now sit between users and search engines, retrieve only a small set of results, and synthesize answers through the summarization of modern content. Together, these changes create a new discovery layer that is far less transparent than traditional search. As Alain explains in the podcast: “We need more visibility and observability into that AI-based layer to figure out what is going on there. Right now, it’s mostly a black box.” Unlike traditional search engines, AI systems don’t provide clear rankings, impressions, or click data that explain why a source was selected. Instead, answers are generated from a mix of retrieved content, training data, and model reasoning. For businesses, that makes it much harder to understand whether their brand is visible in AI-driven discovery. This is where AI visibility tracking becomes valuable. Rather than focusing only on search rankings, teams also need insight into how their brand is represented inside AI responses. Yoast SEO AI + helps surface that layer by allowing teams to observe how their brand appears across AI systems, such as ChatGPT, Perplexity, and Gemini. Must read: What is ChatGPT Search (and how does it use Bing data)? The goal is not simply to track another metric. 
It’s to help businesses understand how AI systems interpret and represent their brand. As Alain notes, visibility in AI systems can vary significantly depending on the platform, because each one relies on different combinations of: search engines indexes training datasets This means a brand might appear frequently in one AI system while barely showing up in another. Without visibility into those differences, it becomes difficult for teams to understand how their content performs in the new discovery landscape. In that sense, tools like Yoast SEO AI + are less about selling a new SEO feature and more about helping businesses observe a rapidly changing ecosystem where discoverability no longer happens only in search results. The next evolution: AI agents making decisions “What we will increasingly see is automated transactions where AI agents navigate websites and initiate actions on behalf of users.” So far, much of the discussion around AI and search has focused on how answers are generated. But according to Alain, the next phase of this evolution may go further. Over the next 12–18 months, AI systems may begin moving beyond answering questions and start performing tasks on behalf of users. Instead of guiding someone toward a website to make a decision, AI agents could increasingly compare options, interact with websites, and complete actions automatically. If that shift happens, the traditional customer journey could change significantly. Alain shares a fascinating perspective on what this might mean for businesses in the coming years in the full podcast conversation. SEO matters more than ever AI isn’t replacing SEO. If anything, it’s reinforcing why good SEO matters in the first place. What’s changing is the path between users and content. Instead of navigating search results themselves, users increasingly receive answers that AI systems retrieve, interpret, and synthesize. That makes strong fundamentals more important than ever. 
Businesses still need to focus on: valuable content clear structure discoverable and indexable pages a distinctive brand identity But the central question for SEO is evolving. It’s no longer just: “Can Google find my website?” It’s now: “Does the AI have a reason to remember my brand?” For more insights from Alain Schlesser on how AI is reshaping SEO, watch the full Get Discovered podcast episode. The post Rethinking SEO in the age of AI appeared first on Yoast. View the full article
  7. Shares in the preeminent graphics software company Adobe Inc. (Nasdaq: ADBE) are dropping significantly in premarket trading this morning following the company’s Q1 2026 earnings results. Yet it’s not the earnings themselves that are driving ADBE stock lower. It’s an announcement from the company’s CEO, Shantanu Narayen, who said he plans to exit the role he has held for over 18 years. Here’s what you need to know: What’s happened? On Thursday, Adobe announced the results of its first quarter for fiscal 2026. And for all intents and purposes, the results were of the caliber that would normally make investors happy: Total revenue of $6.4 billion (up 12% year-over-year) Diluted earnings per share (EPS) of $6.06 adjusted Total annualized recurring revenue (ARR) of $26.06 billion As noted by CNBC, for the quarter, Adobe’s total revenue and EPS figures exceeded investor expectations. The LSEG analyst consensus was that Adobe would bring in total revenue of $6.28 billion and achieve an EPS of $5.87. But if Adobe beat expectations, why is the stock down significantly this morning? Longtime boss is saying goodbye The main reason Adobe’s shares are in the red this morning is that in addition to the company’s earnings results yesterday, the Photoshop maker also announced that its long-running CEO, Shantanu Narayen, will be stepping down from the role. Without a doubt, the departure of Narayen is a loss for the company. As the departing CEO said in his resignation letter, Narayen has worked for Adobe for 28 years and led the company in the chief executive role for over 18 years. Narayen, who is 62, first became CEO in 2007. Adobe shares have grown more than 542% over that period, although they are down considerably since 2024. During Narayen’s 28-year tenure at Adobe, the company’s workforce has grown tenfold, going from 3,000 to 30,000 employees. Its revenue has grown from less than a billion dollars annually to more than $25 billion. 
Perhaps most critically, under Narayen’s chief executive tenure, Adobe transitioned from a company that primarily sold one-time software licenses to one that is now primarily subscription-based. While that move was not always popular with Adobe’s customer base, it has built a foundation for the recurring annual revenue the company now relies on. Narayen has long been a respected figure at Adobe, and within the broader tech industry, so it’s no surprise that his announced departure is having a negative effect on Adobe’s stock price. Narayen says he will stay on as CEO until Adobe’s board appoints a new one, at which point he will remain as Chair of the Board at Adobe. Adobe investors can’t shake AI anxieties Another element to Narayen’s departure that is likely causing investor jitters is that he is stepping down at a time when Adobe has never been more vulnerable. Narayen successfully navigated Adobe through the largely iPhone-driven death of its core Flash technology in the early years of his tenure as CEO. But now the company arguably faces an even more critical flashpoint. As AI tools become more advanced, investors are increasingly worried that they threaten the very foundations of Adobe’s business models. If an AI chatbot can generate a photo on demand, investors worry that customers will find less value in Adobe’s stock photo service. And if AI can make edits and enhancements to photos and graphics simply by using natural language prompts, will future creatives find less value in the company’s Creative Cloud software? To be fair, the AI threat isn’t a problem unique to Adobe. In the first part of this year, software companies of all stripes have been hit hard by investor worries that AI chatbots and their increasing capabilities will negatively impact enterprise and commercial software solutions. 
And while Adobe itself is of course embracing AI tools in its own products, the planned departure of the company’s beloved CEO at this critical time in the industry is making a lot of investors nervous today, as is evident from the company’s plunging stock price. Adobe shares crash on CEO’s planned departure As of this writing, in premarket trading, ADBE shares are down over 7.5% to $249.31 after yesterday’s announcement of Narayen’s upcoming exit. The company’s shares ended yesterday down 1.43% to $269.78. But even before today’s steep drop, Adobe’s shares have had a bad year. As of yesterday’s close, ADBE shares had lost nearly 23% of their value since the year began. Looking back over Narayen’s tenure as CEO, Adobe’s share price has had a stellar run. In December 2007, when Narayen became chief executive, ADBE shares were trading around the $42 range. By 2021, the company’s shares had peaked at nearly $700. But, particularly since 2024, the company’s shares have declined significantly, as fears over AI’s impact on legacy software companies have grown. Those fears are now something that Adobe’s next CEO, whoever that may be, will have to effectively manage. View the full article
  8. Today
  9. Being seen is a fundamental human need. We all can recall a moment when we truly felt “seen” by someone for who we are, and how good and empowering it made us feel. When this happens, it deepens our sense of belonging and makes us more connected to our work, and to others. And today, with so much of our attention being scattered and superficial, being truly seen is as surprising as it is refreshing. Research supports this: a sense of social belonging is one of the strongest predictors of engagement and performance at work. According to Deloitte’s Global Human Capital Trends report, 79% of organizations say that creating a sense of belonging is important or very important for their success. However, only a small percentage feel equipped to make it happen. This needs to change, now. Because when people feel seen, they feel validated, appreciated, and engaged. And that’s where leadership truly begins. According to Nina Bressler, Global Head of Service Academy at Hitachi Energy, “Every time we see someone fully, not just their role but in their humanity, we have the experience of learning and growing together. People lean in, share what they know, and risk showing what they don’t. In that mutual recognition, performance becomes a natural outcome of belonging.” A Personal Story: The Power of Sawabona In the Zulu language, there’s a greeting I love that captures this sense of belonging. It’s “Sawabona.” It means “I see you,” but it’s much deeper than that. It’s not just an acknowledgment or a greeting; it’s an affirmation of someone’s existence and humanity. The response to “Sawabona” is just as powerful: “Ngikhona,” which means “I am here.” This exchange conveys mutual respect, and sets the tone for meaningful connection and authentic interaction. For years, I sat on a leadership advisory board within the intelligence community, made up of accomplished experts across a variety of fields. 
We always sat at the boardroom table, putting our heads together to urgently tackle the high-stakes issues that needed our input. The pressure to perform was always stressful, and the environment felt as intimidating as it was inspiring. But one day, the mood changed. The chairwoman of our board, Renee, began our meeting with a single word: “Sawabona.” This was definitely different from the typical call to order and reading of the agenda, and people were seemingly caught off guard. We all then said the response: “Ngikhona,” I am here. And immediately, people smiled. Not just because it was a little awkward, but because it was so … human. This exchange set the tone for the entire meeting. It was a kind acknowledgment of each person’s presence, and importance. That single act of recognition created an atmosphere where we could show up genuinely and engage deeply, not just as experts but as humans with unique experiences, values, and stories. Why Sawabona Matters for Your Team At work, we forget the power of seeing each other fully. I know I’m guilty of this, because I get, well, busy. We all focus on tasks, deadlines, and outcomes, but better outcomes happen when people feel seen as themselves. Research from BetterUp found that when employees experience a strong sense of belonging, organizations see: 75% fewer sick days 56% improved job performance 50% lower turnover risk These kinds of results are worth the risk of an awkward moment, in my opinion, no? Sawabona is rooted in the African philosophy of Ubuntu, which emphasizes both interconnectedness and mutual care. “I am because we are” speaks to the understanding that our individual worth is shaped by our connection to others. When we see each other, we strengthen the bonds that foster collaboration, innovation, and shared purpose. If you want your team to thrive, fostering a sense of Sawabona is key. Leaders who do this are recognizing people for who they are, not just what they produce. 
When you honor someone’s existence and humanity, you unlock their potential. How to Bring Sawabona to Work Incorporating Sawabona into your team culture isn’t about using the phrase as a token gesture. It’s about showing everyone mutual respect and authentic connection, even in small ways. Here’s how to start: Show Up Fully – Sawabona means showing up, not just physically, but emotionally and mentally. That means you don’t just show up and sit in the room; be engaged. When people feel their presence is valued, they’re more likely to show up as their best selves. Practice Active Listening – The foundation of Sawabona is truly listening. So, be attentive, ask thoughtful questions, and show understanding. Celebrate Individuality – Everyone on your team is unique. Their perspectives, experiences, and backgrounds shape what they bring to the table. Take time to acknowledge what makes each person special. Let that perspective add to new ideas and solutions. Create Space to Share – People need to feel safe to express themselves. Create an environment where your team can give ideas, voice concerns, and add to the conversation without fear of judgment or rejection. The Radical Power of Being Seen The act of being seen is alarmingly radical in a world that frequently treats people as a means to an end. Sawabona rejects the transactional nature of work to focus on a deeper, more authentic human connection. Because people aren’t just cogs in a machine. They’re individuals with worth, complexity, and unique contributions. As a leader, it’s your responsibility to create an environment of support, because your success depends on it. Sawabona is a practice that says, “I see you for who you are, and I value your presence.” Next time you gather your team, start by greeting them with Sawabona, and watch how it transforms the way you work, collaborate, and connect. 
View the full article
Brandon Ervin, Director of Product Management for Google Search Ads, recently discussed campaign consolidation, AI Max, and what advertiser control looks like in 2026 on Google’s Ads Decoded podcast. The conversation was serious and informed, and reflected a product team that understands advertiser concerns and is actively working to address them. But the podcast is also incomplete. The gap between what Google said and what advertisers actually experience from their sales organization is large enough to warrant a direct response. Ervin’s team is doing genuinely good work, but the platform’s structural incentives haven’t changed. Google’s evolving product is creating problems faster than it can solve them. Performance is now measured against economic standards, and that shapes how a search ads audit should be performed. Recent improvements to Google Search Ads The recent improvements are genuine: Brand exclusions in Performance Max and Demand Gen. Site visitor and customer exclusions from PMax campaigns. Network-level reporting within bundled campaigns. Improved search term visibility. Brand and geo controls inside AI Max at the ad group level. Semantic modeling that doesn’t anchor on campaign or ad group IDs, reducing learning period risk during consolidation. These are meaningful. They are also solutions to issues introduced by bundling, opacity, and aggressive automation rollout. These products have been mercilessly shopped to advertisers since 2021, and the controls that make them usable arrived years after the sales push began. The ability to separate brand from non-brand traffic inside PMax/AI Max should not be framed as innovation. It restores a fundamental distinction that previously existed by default. The ability to see network performance inside a bundled campaign is not an expansion of control. It restores visibility that was removed. An audit must ask whether new tools are genuinely expanding control or merely reintroducing baseline transparency. 
Table stakes: What everyone agrees on Before the real audit begins, the fundamentals. These are uncontroversial and should already be in place: Run full ad extensions (sitelinks, callouts, structured snippets, image, call). Use automated bidding with intentional target-setting and conversion action selection (I recognize there are still holdouts here, but that seems crazy to me). Maintain negative keyword lists. Write ads relevant to the queries they serve. Audit automatically created assets for accuracy and brand safety. Cut Search Partners and Display expansion from Search campaigns. Separate brand and generic campaigns using brand controls. Exclude site visitors and past customers from prospecting campaigns where appropriate. Import offline conversion data (MQLs, SQLs, revenue, CLV, repeat rate) to feed the algorithm downstream signals. Weight conversion values by actual downstream conversion rates. Account for mobile vs. desktop performance gaps. Those are table stakes. The real audit begins after that. What a 2026 search audit must focus on With the prevalence of AI, advertisers need to focus on reconstructing economic visibility in systems designed around aggregation and automation. Signal architecture In the podcast, Ervin says “control still exists, it just looks different.” Ad controls — where, when, and to whom ads appear — are still important and changing; some think for the worse. The old ad controls — exact match, manual bids, network selection, and device modifiers — gave advertisers direct influence over where ads appeared and what they paid. However, the new controls are indirect. Control now lives in data quality, density, and selectivity. These inputs influence the algorithm, but the algorithm makes the final call. 
An audit should focus on three questions: Quality: Are you importing revenue, pipeline stage, or qualified lead status, or only surface conversions? Density: Is there enough high-quality data for the model to learn from, or is it sparse and noisy? Selectivity: Are you intentionally limiting what Google can see, or are you passing everything indiscriminately? Some advertisers pass only net-new or high-value customers; the majority of the time, it is better to pass the densest and most predictive conversion set. Incrementality Google optimizes toward reported conversions, not incremental conversions. Brand search often captures existing demand. Retargeting often captures users already in motion. PMax/AI Max frequently blends these signals. Ervin was asked: Are AI-driven campaigns over-indexing on warm brand traffic to inflate blended ROAS (return on ad spend)? He doesn’t dispute the problem, but points to partial solutions: using brand controls, theming your account better, and running multi-campaign A/B tests. If incrementality is not measured, automation amplifies non-incremental signals. Marginal returns Google uses a blended cost-per-action (CPA). For example, the first $50K of spend might return a $30 CPA, while the next $50K might return a $120 CPA. With automation, money is spent until the blended metric falls within tolerance, meaning the last dollar is not spent efficiently. The vast majority of advertisers are bidding far beyond what they should be and have no idea it is happening. An audit must: Plot spend against incremental conversions. Estimate marginal CPA at each spend tier. Identify diminishing return curves. Compare marginal CPA to lifetime value. A lower target makes the algorithm more selective, competing in fewer high-value auctions. Google doesn’t suggest this, because it would mean less spend, and lower bids are less effective in general.
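The marginal-return check described above can be sketched in a few lines of Python. The spend tiers below are illustrative numbers echoing the $30/$120 example, not data from any real account:

```python
# Sketch: estimate marginal CPA per spend tier from cumulative
# (spend, conversions) pairs. All figures are illustrative.

def marginal_cpa(tiers):
    """tiers: list of (cumulative_spend, cumulative_conversions),
    sorted by spend. Returns the marginal CPA of each tier."""
    out = []
    prev_spend, prev_conv = 0.0, 0.0
    for spend, conv in tiers:
        delta_spend = spend - prev_spend
        delta_conv = conv - prev_conv
        # A tier that adds spend but no conversions has infinite marginal CPA.
        out.append(delta_spend / delta_conv if delta_conv else float("inf"))
        prev_spend, prev_conv = spend, conv
    return out

# First $50K at ~$30 blended CPA, next $50K at ~$120 marginal CPA:
tiers = [(50_000, 1_666.7), (100_000, 2_083.3)]
print([round(c) for c in marginal_cpa(tiers)])  # [30, 120]
```

Comparing each tier’s marginal CPA against lifetime value shows where the last dollar stops paying for itself, which a blended CPA hides.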
Query resolution and ability to lower targets On the podcast, Ervin acknowledges that some AI Max matches can “look a little wonky” and says his team is working on exposing the model’s reasoning. Query mapping has gotten meaningfully worse over the past several years: queries landing in the wrong ad groups, matching to keywords with different intent, and broad match pulling in traffic unrelated to the keyword. AI Max has accelerated this — there’s been an increase in the volume of irrelevant queries flowing through AI Max campaigns, with no connection to the advertiser’s business or keywords in the account. Meanwhile, Google’s recommendations consistently push toward broad matching and large themed ad groups. The issue is not whether broad match works, but whether high-value intent is being diluted in larger, broader ad groups. Fewer ad groups means that we cannot effectively or meaningfully lower targets without a massive structural negative schema, so performance differences have to be large enough to validate the new structure. An audit should: Extract full search term reports. Classify queries by intent tier. Compare CPA and lifetime value by query type. Quantify irrelevant or weakly related matches. Measure performance drift across match types. Network economics Performance Max and Demand Gen bundle multiple networks into single campaigns, but offer limited visibility into which networks drive results. This makes it hard to cut the underperforming ones. The slow rollout of network-level controls systematically benefits Google’s less competitive inventory. An audit must: Break out performance by network. Compare CPA and lifetime value by placement. Identify cross-subsidization. Determine whether weaker networks are relying on surplus from strong search inventory. 
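The search-term classification step in the audit above can be sketched similarly. The intent keyword lists, sample queries, and cost figures are hypothetical assumptions, not output from any ads platform:

```python
# Sketch: bucket search terms into intent tiers and compare CPA by tier.
# Keyword rules and rows are illustrative placeholders.

HIGH_INTENT = ("pricing", "quote", "buy", "vendor")
RESEARCH = ("how to", "what is", "examples")

def intent_tier(query):
    q = query.lower()
    if any(word in q for word in HIGH_INTENT):
        return "high"
    if any(phrase in q for phrase in RESEARCH):
        return "research"
    return "other"

def cpa_by_tier(rows):
    """rows: (query, cost, conversions) tuples from a search term report."""
    totals = {}
    for query, cost, conversions in rows:
        bucket = totals.setdefault(intent_tier(query), [0.0, 0.0])
        bucket[0] += cost
        bucket[1] += conversions
    return {tier: (cost / conv if conv else float("inf"))
            for tier, (cost, conv) in totals.items()}

rows = [
    ("erp vendor pricing", 400.0, 8),  # high intent
    ("what is an erp", 300.0, 1),      # research intent
    ("erp memes", 50.0, 0),            # weakly related, no conversions
]
print(cpa_by_tier(rows))
```

A real audit would pull the full search term report and use richer classification, but even a crude tiering like this exposes how much spend lands on research-stage or irrelevant matches.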
Value redistribution Combining these elements in your audit will help you succeed in this new world of ad search: Non-incremental traffic inflates conversion counts, making performance look better than it is. Looser match types expand where ads appear, diluting intent precision and forcing consolidated ad groups and spend under blanket-level targets and bids. No clean marginal return visibility makes it much more difficult to find the point of negative return. Network bundling hides which channels actually perform. The cumulative effect is that the surplus value generated by your best inventory and high-intent, high-converting search queries gets redistributed across Google’s weaker inventory (e.g., Display, YouTube, Discover, Gmail, crazy tail queries). This is how a dwindling supply of valuable search queries ends up inflating the cost per click (CPC) of low-quality inventory. The Ads Decoded episode: Is your campaign structure holding you back in the era of AI? View the full article
  11. That may sound defeatist, but unfortunately that’s just how the web works. Rankings slip, competitors improve, search intent shifts, and what was your best-performing article two years ago might be leaking traffic right now without you even noticing. This is…Read more ›View the full article
  12. Hello again, and welcome back to Fast Company’s Plugged In. On March 9, Jay Graber stepped down as CEO of Bluesky. She will become the social networking platform’s chief innovation officer, while Toni Schneider, a venture capitalist and former CEO of WordPress parent company Automattic, joins Bluesky as interim CEO. (I may be the last person left who also associates Schneider with Oddpost, an impressive browser-based email client he co-created way back before Gmail existed.) Graber explained her decision as stemming in part from a desire to turn the CEO role over to someone who can help scale up the platform. From November 2024 to January 2025, as Elon Musk’s role in Donald Trump’s reelection prompted many Twitter users (including me) to hatch exit strategies, Bluesky added 10 million users. That turned out to be the peak of the network’s boom, at least so far; 10 million users is also how many it’s added in the past 12 months. It’s still growing, but not at the torrid pace that will get it to hundreds of millions of people anytime soon. If I had invested in Bluesky—which Schneider’s venture firm, True Ventures, has—I’d want to see it grow far larger. As an individual user, however, I find it quite pleasant at its current size. Maybe even cozy, in a way Twitter had stopped being long before Musk trashed it. (I also enjoy the even tinier Mastodon.) Should Bluesky ever get ginormous, I hope it manages to retain the intimacy that it kindles today. But I’m less curious about the future of Bluesky the social network than I am about the technology behind it. Called AT Protocol, it’s responsible for organizing all those users and posts so that the right people see the right stuff at the right time. And unlike the comparable infrastructure in place at behemoths such as Twitter, Facebook, and Instagram, it’s open.
Anyone can create their own social network based upon AT Protocol, or remix an existing one (such as Bluesky) by tweaking its algorithm or other attributes. Users can preserve their personal social graphs even if they use several otherwise distinct networks based on the protocol. When I first talked to Graber in December 2023, Bluesky wasn’t yet fully open to the public, and had just 2.3 million members. She seemed as excited about AT Protocol as Bluesky itself, and told me she saw it as a potential antidote to social-media toxicity, moderation problems, and general user dissatisfaction with how the people who operate social networks do their jobs. If you didn’t like Bluesky as Graber managed it, you could switch to a version of the service powered by a different algorithm, or a wholly independent social network running AT Protocol. You wouldn’t even have to do so much as create a user account. From both a technological and cultural standpoint, that’s a way more grandiose goal than simply building a social network that’s bigger and better than Twitter. As someone who loved Twitter until I didn’t, I found it immensely appealing. Who wouldn’t want more control over their social presence? But a little over two years later, it remains a vision more than reality. Indeed, Bluesky has a festering reputation in some quarters as an obnoxious liberal bubble unwelcoming of other perspectives, which might not be a problem if people were remixing the network or creating new alternatives based on its technology. AT Protocol was hardly dead on arrival. There are hundreds of applications that use it, from Instagram and TikTok alternatives to a stock portfolio tracker to an app that puts Bluesky on your Apple Watch. Many are intriguing in their own right. But most are satellites revolving around Bluesky and its community, which was not the original idea. Even when I spoke to Graber in 2023, the possibility of an open social protocol changing everything was not exactly new.
Mastodon, which turns 10 on March 16, is powered by ActivityPub, a standard with goals similar to AT Protocol. Meta incorporated a measure of ActivityPub support into Threads (kinda, sorta)—and it’s not clear how invested the company is in going further. Even more to the point, Twitter cofounder and former CEO Jack Dorsey has long said that he regrets that Twitter ever became a company. Instead, he contends, it should have been an open protocol all along. Toward the end of his time there, he channeled that belief into incubating two such protocols. One became Bluesky; the other is the lesser-known Nostr, whose homepage cheerfully acknowledges the challenge it faces with the tagline “An open social protocol with a chance of working.” I wish the best for everyone behind AT Protocol, ActivityPub, and Nostr, but I can’t help but wonder if the failure of the relatively small number of people interested in this stuff to coalesce around one protocol helps explain why progress has been so slow. (As computer scientist Andrew S. Tanenbaum waggishly put it in the 1980s, “The nice thing about standards is that you have so many to choose from.”) It’s as if the companies that made browsers had never agreed on the shared technological underpinnings that let us use Chrome, Safari, Firefox, or any of innumerable other options to explore the same World Wide Web. For now, I am attempting to stay active on Bluesky, Mastodon, and Threads, though it’s hardly a cakewalk. Openvibe, the app I used to post to all three, has become so unreliable lately that I’ve mostly given up on it. Flipboard CEO Mike McCue tells me that he wants to add crossposting to Surf—a wildly ambitious app, still in closed beta, that weaves together the entire internet into user-curated feeds—but is still figuring out how to do it well. 
The only long-term solution involves all of these networks—plus Twitter, Facebook, and many others yet to be born—settling on a protocol so universal that they all just work together, without 99.9% of us needing to stop and wonder why. I’m realistic about the daunting odds of this happening, but I haven’t given up. And I hope that Bluesky won’t either—regardless of where it goes under new management. You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard. More top tech stories from Fast Company MacBook Neo review: niceness on a budget Apple’s long-awaited laptop is even cheaper than the pundits expected, and still feels like a Mac. Read More → Phoenix has lived with Waymos longer than any U.S. city. Here’s what its mayor learned Mayor Kate Gallego talks about working with Waymo, redesigning cities for autonomous vehicles, and why robotaxis may reshape everything from parking to public transit. Read More → GoFundMe launches AI fundraising coach to help people raise more money The new tool drafts campaign messages, suggests titles and photos, and guides users on how to share their fundraiser. Read More → This new foldable phone may have upstaged Apple in the ‘zero-crease’ war Oppo’s Find N6 isn’t fully creaseless, but it’s close. Read More → OpenAI’s delayed ‘adult mode’ underscores the challenges of age-gating AI A lot is riding on OpenAI’s ability to separate older ChatGPT users from younger ones. 
Read More → The uncomfortable valley: Microsoft Teams emoji faces have got to go They don’t make the digital workplace more casual. They make it uncomfortably weird. Read More → View the full article
  13. The problem: ChatGPT doesn’t have “rankings”. At least not in any traditional sense. Its responses are probabilistic: different every time, with brands appearing and disappearing from one query to the next. According to research from SparkToro, there’s a <1 in…Read more ›View the full article
  14. As of yesterday, March 12, hundreds of thousands of innovators, disruptors, and leaders began descending on Austin for SXSW. If you search “Tech and AI” in this year’s schedule, you’ll find 185 results. That’s more than double the 80 AI sessions in 2024, the same year I wrote a Fast Company op-ed about how women have spent decades building the intellectual foundation of AI while receiving almost none of the credit. It was also the year that companies with at least one female founder raised $38.8 billion in venture capital funding, a 27 percent increase from the year prior, but still not close to the 2021 high point of $62.5 billion. Two years later, the gap—both in acknowledgement and investor funding—hasn’t closed. However, something else is happening, and it’s worth paying attention to. There is a new wave of women who refuse to wait for the AI industry to become “fair” and “equal.” They are building their own companies, on their own terms, with a more authentic and purpose-driven design mentality. It’s not general-purpose AI; it’s gender-purpose AI. An important distinction Before you roll your eyes, the distinction matters more than you might think. By 2030—which is now only four years away—AI won’t just enhance companies’ business models. According to IBM, it will be the business model. Right now, that business model is, unsurprisingly, being built by male-dominated teams for general audiences. The truth is technology—as an industry and a concept—was never built for women. It was not built to prioritize or accommodate our visions. But that is changing. A new class of female leaders in AI is disrupting this model and demanding more room for gender-purpose AI and less patience for the influx of male-dominated teams building general-purpose tools. This is the year we move beyond celebrating their presence and start backing their vision with real investment.
One of those women is Rana el Kaliouby, co-founder and general partner of Blue Tulip Ventures, who will deliver a keynote at this year’s conference titled “Why the Future of AI Must Be Human Centric.” She has spent more than two decades humanizing technology. As co-founder of Affectiva, she pioneered the field of Emotion AI, which reads human feeling through facial expression and vocal cues. Now, at Blue Tulip Ventures, she literally puts her money where her mission is, investing in early-stage startups building ethical AI that is good for people. The word “good” is subjective. But for too long, it’s been defined by the people building the problem, not solving it. The problem is also being solved by women like Valerie Chapman, CEO and co-founder of Ruth AI, an AI-powered career advancement platform. Last month, Valerie asked Sam Altman at an OpenAI builder town hall how AI can be used to fix the $1.6 trillion gender wage gap. His response was that AI should be an equalizing force in society. And as Valerie pointed out in her recent op-ed, when AI is designed with intention it can close the gap, and it’s time to build it. What’s next As a fellow female founder helping brands understand and utilize AI—as a topic and technology—in their comms strategies, here’s what this shift tells me about where we are headed in 2026. Male tech leaders want AGI. Female tech leaders want gender-purpose AI. The second is more inclusive. When women build AI, they tend to ask different questions in the design and development stage. Questions like: Who is this actually for? Who will benefit from these capabilities? The truth is artificial general intelligence, or AGI, is at least 10 years away, and the race toward the “holy grail,” as Big Tech has dubbed it, should not hold as much power and influence as it does. Gender-purpose AI is a race toward something more rewarding and meaningful: relevance.
What a concept—that we could have more technology that works for the people it claims to serve. The gender wage gap will not close with more women working in tech. It will close when more women are building tech. Representation matters every month, not just during Black History Month, Women’s History Month, or International Women’s Day. Women deserve representation in the very tools and technologies they depend on. With almost 78 million women in the American workforce, this is a demographic that has earned our time, attention, and investment. Investment in gender-purpose AI means nothing without investing in the women who will build tomorrow’s innovations. The increase in female-founded, VC-funded companies is a great step in the right direction. But the progress pipeline matters just as much, if not more. We need more mentorship programs, technical education, and access to capital for first-time female founders who have the vision but not a seat at the Big Tech table. To ensure we double down on gender-purpose AI as an industry, we have to prioritize and support the women who want to build what comes next. The milestones for women in AI aren’t just on stage. They are in hallways and in boardrooms. When women lead AI companies, the product looks different. Canadian computer scientist Joy Buolamwini pioneered ‘Gender Shades’ in 2018, which piloted an intersectional approach to inclusive product testing for AI, exposed racial and gender bias in Microsoft’s, IBM’s, and Amazon’s facial recognition systems, and insisted those companies change. Rana built technology that reads human emotion because she believed machines should understand people, not just process them. These are real-world use cases that prove that whoever builds the technology determines what the technology does and who it serves. In 2026, women won’t be waiting for “the next big thing” because they will be the ones behind it.
They will be the ones building the technology that addresses what male leaders have not addressed: equity, inclusion, and a redefinition of “good” that finally reflects what 51% of the world wants, needs, and deserves. It’s time the other 49% joined us. View the full article
  15. The latest PPC Pulse highlights Google’s agency-focused Merchant Center rollout, Smart Bidding guidance for new campaigns, and emerging AI usage trends in PPC. The post Merchant Center Expands, Google Clarifies Smart Bidding, State Of PPC Report – PPC Pulse appeared first on Search Engine Journal. View the full article
  16. Usually the epitome of good humor, my friend Alex was seething. She had devised a zany and creative marketing idea for her firm. Securing the budget, designing a content strategy, hiring a creative agency, and then doing all the related work had consumed Alex and her team for a full six months. This was on top of their already demanding jobs. And then the unthinkable happened. “Before the idea was announced, one of my coworkers, a PR guy, shared the idea—my idea—with the CEO and CMO.” I watched her pace around my kitchen, her face getting redder and redder. “While he didn’t exactly say he’d done the work himself, how he talked about it made it seem like it was all his.” “Did you tell anyone, go to your manager?” I asked. Alex stopped her pacing. “I did, and he said, ‘When you’re creative, people will steal your ideas—you should just get used to that fact.’” As we talked, I could hear that under Alex’s anger was something else—curiosity. About what this all meant. About what she could have, or should have, done differently. Was she the problem? Did she need to figure out how to play the game better? Was the PR guy the issue? Or her boss? And if it was her boss, did she need to quit? Those were the wrong questions. It’s not you or them. The problem lies in the norm of tolerating bad behavior. When workplaces say, “Creative ideas get stolen,” harm becomes a given, not a choice. Ideas get stolen because there’s no accountability. To be clear, sometimes an idea is just in the air, and two or more people come to it around the same time. And oftentimes, we create ideas together. I’m not talking about those moments. I’m talking about when it’s fully apparent what is happening—idea theft, where one party takes credit for the work of others—and how that theft is tolerated. Research shows that knowledge workers are keenly aware of idea theft; nearly one-third report having had it happen to them. Work often treats idea theft as no big deal. But the cost is real.
• Integrity is lost when ideas are disconnected from their source. The depth of the concept or the completeness of the thinking is lost. Downstream decisions are made without the rootedness of the original inspiration. • Theft demotivates the next idea. When ideas are stolen regularly, idea generation shuts down because no one volunteers to be violated. And Alex’s boss was right about one thing: Alex will certainly create more ideas. People create when they feel safe enough to imagine something new. That—by definition—is why regulating bad behavior matters. The idea that was stolen? It became one of the firm’s most successful efforts that year. It inspired the company’s next ad campaign and even a Super Bowl spot. But they didn’t have any follow-up to this one-off success. Why? Because they no longer had Alex. The Counterintuitive Insight: We Can Take Care of Our Commons Most of us are taught to stay quiet. Don’t make a scene. Go along to get along. And when someone crosses a line—steals credit, dominates meetings, dismisses ideas—we assume someone in authority will fix it. But that assumption hides a deeper truth: the rules of our workplaces are not enforced by leaders alone. They are enforced by what we tolerate together. In 2009, political economist Elinor Ostrom won the Nobel Prize in economics for proving something that ran against decades of economic orthodoxy. Before her work, economists widely believed in the “tragedy of the commons”—the idea that when a resource is shared, individuals will inevitably overuse it and destroy it. The only solution, it was thought, was top-down control: private ownership or government regulation. Ostrom proved otherwise. She showed that communities, left to their own devices, often devise highly sophisticated systems of shared management—systems where consequences don’t come from a distant authority but from the group itself. The people who depend on each other can also hold each other accountable. 
Her work wasn’t about office politics. But it applies. Every team shares something. It might not be water or grazing land. But trust. Energy. Credit. Voice. And just like natural resources, these intangible goods are depleted when people act only in their own interests at the expense of shared interests. When a manager takes all the credit. When someone interrupts constantly. When emotional labor always falls on the same shoulders. What Ostrom teaches us is that we don’t have to live inside that dynamic. We can protect shared goods—not with permission from the top, but through practices we design ourselves. Through consequences we create and apply together. Shared spaces survive when the people inside them protect them. Change the Norm When something harmful happens at work, our instincts split: ignore it or wait for someone in charge to handle it. But silence has a cost. It makes us complicit in what we ache to change. Monica Lewinsky—dragged through the mud of a scandal she didn’t create alone—calls on us to be upstanders: people who don’t just stand by, but stand up. Who see cruelty and choose courage. Who see harm and refuse to treat it as normal. Research shows that when bystanders step in, bullying stops within seconds—proving that empowering peers to act can cut bad behavior in half. What we allow becomes the rule of the room. When someone steals an idea, and no one says anything, the norm survives. When someone names it—calmly, clearly—the rule changes. But let’s be clear: This isn’t work any of us do alone. If bad behavior is tolerated, it grows. When it meets consequences, it stops. Bad behavior isn’t mysterious—it’s simply a crime of opportunity, repeated when no one intervenes. This is not a personal problem. It’s a social problem. It’s up to those who see it to act—to create the consequences. Not just to protect the harmed, but to stop the harm from spreading. Behavior doesn’t change because people suddenly become better. 
It changes because someone names what’s happening and refuses to treat it as normal. When you do, you won’t do it alone. Another person will join in. And then another. Until teams decide: We can be clear, fair, and firm with each other. That our shared space is worth defending, protecting. Let yourself run toward that danger, not away from it. Adapted from the book Our Best Work: Break Free from the 24 Invisible Norms That Limit Us, by Nilofer Merchant. Copyright © 2026 by Nilofer Merchant. Reprinted by permission of Harper Business, an imprint of HarperCollins Publishers. View the full article
  17. We recently started a small project to clean up how parts of our systems communicate behind the scenes at Buffer. Some quick context: we use something called SQS (Amazon Simple Queue Service). These queues act like waiting rooms for tasks. One part of our system drops off a message, and another picks it up later. Think of it like leaving a note for a coworker: "Hey, when you get a chance, process this data." The system that sends the note doesn't have to wait around for a response. Our project was to perform routine maintenance: update the tools we use to test queues locally and clean up their configuration. But while we were mapping out what queues we actually use, we found something we didn't expect: seven different background processes (or cron jobs, which are scheduled tasks that run automatically) and workers that had been running silently for up to five years. All of them doing absolutely nothing useful. Here's why that matters, how we found them, and what we did about it.
Why this matters more than you'd think
Yes, running unnecessary infrastructure costs money. I did a quick calculation and for one of those workers, we would have paid ~$360-600 over 5 years. This is a modest amount in the grand scheme of our finances, but definitely pure waste for a process that does nothing. However, after going through this cleanup, I'd argue the financial cost is actually the smallest part of the problem. Every time a new engineer joins the team and explores our systems, they encounter these mysterious processes. "What does this worker do?" becomes a question that eats up onboarding time and creates uncertainty. We've all been there — staring at a piece of code, afraid to touch it because maybe it's doing something important. Even "forgotten" infrastructure occasionally needs attention. Security updates, dependency bumps, compatibility fixes when something else changes. This led to our team spending maintenance cycles on code paths that served no purpose.
And over time, the institutional knowledge fades. Was this critical? Was it a temporary fix that became permanent? The person who created it left the company years ago, and the context left with them.
How does this even happen?
It's easy to point fingers, but the truth is this happens naturally in any long-lived system. A feature gets deprecated, but the background job that supported it keeps running. Someone spins up a worker "temporarily" to handle a migration, and it never gets torn down. A scheduled task becomes redundant after an architectural change, but nobody thinks to check. We used to send birthday celebration emails at Buffer. To do this, we ran a scheduled task that checked the entire database for birthdays matching the current date and sent customers a personalized email. During a refactor in 2020, we switched our transactional email tool but forgot to remove this worker—it kept running for five more years. None of these are failures of individuals — they're failures of process. Without intentional cleanup built into how we work, entropy wins.
How our architecture helped us find it
Like many companies, Buffer embraced the microservices movement (a popular approach where companies split their code into many small, independent services) years ago. We split our monolith into separate services, each with its own repository, deployment pipeline, and infrastructure. At the time, it made sense: each service could be deployed on its own, with clear boundaries between teams. But over the years, we found the overhead of managing dozens of repositories outweighed the benefits for a team our size. So we consolidated into a multi-service single repository. The services still exist as logical boundaries, but they live together in one place. This turned out to be what made discovery possible. In the microservices world, each repository is its own island. A forgotten worker in one repo might never be noticed by engineers working in another.
There's no single place to search for queue names, no unified view of what's running where. With everything in one repository, we could finally see the full picture. We could trace every queue to its consumers and producers. We could spot queues with producers but no consumers. We could find workers referencing queues that no longer existed. The consolidation wasn't designed to help us find zombie infrastructure — but it made that discovery almost inevitable.
What we actually did
Once we identified the orphaned processes, we had to decide what to do with them. Here's how we approached it. First, we traced each one to its origin. We dug through git history and old documentation to understand why each worker was created in the first place. In most cases, the original purpose was clear: a one-time data migration, a feature that got sunset, a temporary workaround that outlived its usefulness. Then we confirmed they were truly unused. Before removing anything, we added logging to verify these processes weren't quietly doing something important we'd missed. We monitored for a few days to make sure they were not called at all, and we removed them incrementally. We didn't delete everything at once. We removed processes one by one, watching for any unexpected side effects. (Luckily, there weren't any.) Finally, we documented what we learned. We added notes to our internal docs about what each process had originally done and why it was removed, so future engineers wouldn't wonder if something important went missing.
What changed after cleanup
We're still early in measuring the full impact, but here's what we've seen so far. Our infrastructure inventory is now accurate. When someone asks, "What workers do we run?" we can actually answer that question with confidence. Onboarding conversations have gotten simpler, too. New engineers aren't stumbling across mysterious processes and wondering if they're missing context.
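Going back to the discovery step: a producer/consumer cross-reference over a single repository can be sketched as a small scan. The `send_message`/`receive_message` patterns and the sample file contents are illustrative assumptions about how queue calls might appear in code, not Buffer's actual conventions:

```python
# Sketch: flag queues that are written to but never read from,
# by scanning source files for producer and consumer call sites.
import re

PRODUCER = re.compile(r"send_message\w*\(.*?queue=['\"](\w+)['\"]", re.S)
CONSUMER = re.compile(r"receive_message\w*\(.*?queue=['\"](\w+)['\"]", re.S)

def orphaned_queues(sources):
    """sources: iterable of file contents. Returns queue names
    with at least one producer and no consumers."""
    producers, consumers = set(), set()
    for text in sources:
        producers.update(PRODUCER.findall(text))
        consumers.update(CONSUMER.findall(text))
    return sorted(producers - consumers)

files = [
    "sqs.send_message(queue='emails', body=payload)",
    "sqs.send_message(queue='birthdays', body=payload)",  # no consumer anywhere
    "sqs.receive_messages(queue='emails')",
]
print(orphaned_queues(files))  # ['birthdays']
```

In a real repo you would walk the tree (e.g., with `pathlib`) instead of hardcoding strings, and, as the article stresses, still verify with logging before deleting anything a scan flags.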
The codebase reflects what we actually do, not what we did five years ago.
Treat refactors as archaeology and prevention
My biggest takeaway from this project: every significant refactor is an opportunity for archaeology. When you're deep in a system, really understanding how the pieces connect, you're in the perfect position to question what's still needed. That queue from some old project? The worker someone created for a one-time data migration? The scheduled task that references a feature you've never heard of? They might still be running. Here's what we're building into our process going forward:
During any refactor, ask: what else touches this system that we haven't looked at in a while?
When deprecating a feature, trace it all the way to its background processes, not just the user-facing code.
When someone leaves the team, document what they were in charge of, especially the stuff that runs in the background.
We still have older parts of our codebase that haven't been migrated to the single repository yet. As we continue consolidating, we're confident we'll find more of these hidden relics. But now we're set up to catch them and prevent new ones from forming. When all your code lives in one place, orphaned infrastructure has nowhere to hide. View the full article
  18. Zyxel Networks' new FWA7 solution is packed with features and performance, and it could be just what ambitious WISPs are looking for. The post Zyxel Networks targets US & UK WISP/MSP markets with world’s first Wi-Fi 7 standard power 6 GHz dual-band PtMP FWA solution appeared first on Wi-Fi NOW Global. View the full article
  19. 7SIGNAL says the solution to AIOps involves making network-wide Wi-Fi (and other) data accessible to AI platforms via MCP. The post New paper from 7SIGNAL: Maximise enterprise networking operational benefits with MCP-based AI integration appeared first on Wi-Fi NOW Global. View the full article
  20. During an end-of-the-fiscal-year spending spree last year, the Department of Defense (DoD) dropped some dough on new Herman Miller furniture. The DoD spent $60,719 for chairs from the Michigan furniture manufacturer last September, according to the report from the watchdog group Open The Books, including at least one $1,844 Aeron Chair, the brand’s popular, ergonomic, fabric-meshed office chair. The Herman Miller purchases were just a small fraction of the record $93 billion detailed in the report, which was more than the DoD has spent in any single month since the group’s records began in 2007. For Herman Miller, its share was peanuts, considering the company is the longest holder of a federal government contract for office furniture, at more than 40 years. (Herman Miller did not respond to a request for comment by publication.) The DoD goes on an annual spend-it-or-lose-it buying spree every fall no matter the president or party, Open The Books has found over a decade of tracking it. The group called on Defense Secretary Pete Hegseth to rein in the use-it-or-lose-it approach the agency takes to its budget. Instead, 2025’s spending was a record. While some line items highlighted in the report seem like clear attempts to run up expense reports before the time runs out, like $98,000 on a Steinway & Sons grand piano and $2 million on Alaskan king crab, office furniture purchases at least make practical sense. With nearly 3 million military and civilian employees, the DoD is one of the largest employers in the U.S. That’s a lot of butts in seats, which means a big budget for chairs and other office furniture. Open The Books found furniture purchases spike 564% every September over the monthly average across the other 11 months of the year. Last year, the DoD spent $225.6 million on furniture in total. Herman Miller’s parent company MillerKnoll had obligations of more than $15 million in the last fiscal year, and the DoD makes up 80% of its awarding agencies.
In the past, the Defense Advanced Research Projects Agency (DARPA) spent nearly $250,000 on Herman Miller furniture for a conference room “refresh,” according to Open The Books, and the Federal Emergency Management Agency (FEMA) spent $284,000 on Herman Miller furniture for its conference center. For defense officials looking to set up an office, Herman Miller offers DoD-approved options for everything from desks, carts, and lockers to nurses’ stations, pharmacies, and labs. This isn’t the kind of workplace interior design work that Ikea was built to handle. For Herman Miller, though, its volume of government sales isn’t what it used to be. Federal spending records since 2008 show MillerKnoll’s transactions peaked during former President Barack Obama’s administration, with obligations totaling more than $174 million in 2010, a figure that dropped to a low of just over $12 million in 2023. While the DoD might not be as loyal a customer as it once was, Herman Miller has found other government work elsewhere. The company says it’s one of the largest furniture suppliers to state and local government agencies. View the full article
  21. Here is a number worth sitting with: 295%. That’s how much U.S. app uninstalls of ChatGPT surged in a single day last month, after OpenAI struck a deal with the Department of Defense that its rival Anthropic had publicly refused to sign. In the same 24-hour window, Claude’s downloads jumped 51%. By that evening, Anthropic’s app had climbed to No. 1 on the U.S. App Store, leapfrogging 20 apps in under a week. One values-driven decision. One weekend. A measurable transfer of market share. Most of the coverage framed this as a political story. It isn’t. Or at least, not only. It’s also a brand loyalty story. And it tells us something important about the category war that’s actually being fought in AI, one that has very little to do with compute power.

The Switching Cost Nobody Is Naming

Brand strategists understand switching costs intuitively. In banking, insurance, enterprise software—anywhere the friction is high—emotional and values-based factors end up doing as much heavy lifting as product performance. The category with the highest rational switching cost often becomes the category where trust matters most. AI is moving toward that same dynamic, faster than most people are ready for. An AI platform doesn’t just perform tasks. It accumulates context. It gets to know us—how we think, our shorthand, our working rhythms. For enterprise users in particular, this depth compounds quickly. The longer a business embeds an AI platform into its workflows, the higher the exit cost becomes, not just technically, but cognitively, culturally, and even emotionally. There’s a name for this: the relational cost. It’s the switching cost nobody in the AI conversation is actually naming. And in any high-switching-cost category, the ‘brand’ question—what does this company stand for, and do I trust it—eventually becomes the definitive one.

Operationalizing Values Is Not the Same as Talking About Them

The consumer response to the DoD news didn’t come out of nowhere.
It was the visible payoff of a positioning strategy years in the making. Anthropic has been making a consistent, operationalized argument about what kind of company it is—and backing it with choices that have visible cost. The Claude Constitution is a publicly available, inspectable training framework. Not a mission statement—a framework. Anthropic’s Economic Index analyses AI adoption across sectors and positions the company as a participant in the difficult societal conversation about AI’s impact on employment, not just a product vendor. These are category-shaping moves, not PR. The market had been registering these signals quietly, long before last month. Independent analyses suggest Claude holds 32% of enterprise AI usage, significantly disproportionate to its 3.5% consumer footprint. Enterprises—more deliberate, more risk-averse, more consequentially exposed to AI failure—have already been choosing Claude at scale. That gap between enterprise and consumer adoption isn’t a coincidence. It’s a trust premium.

The Cost of Caring

It’s easy to have values when they cost you nothing. For Anthropic, these came with a $200 million price tag. That’s the suggested value of this contentious Pentagon contract. Furthermore, the supply-chain risk designation—a label the Trump administration has now formally applied, and which Anthropic is challenging in court—threatens hundreds of millions more across broader government contracts. This damaging designation, historically reserved for foreign adversaries like Huawei, has never before been applied to an American company. That is a real commercial cost, not a hypothetical one. But what looks like a ceiling from one angle looks like a moat from another. In the weeks since the dispute went public, Anthropic’s revenue run rate has nearly doubled—from $9 billion at the end of 2025 to almost $20 billion today, according to Bloomberg. The government closed a door. The market opened several more. That is not a coincidence.
That is what trust, operationalized and defended under pressure, looks like as a growth strategy.

So What Does This Mean for Your Business?

The question that should be on the table in every leadership meeting right now: which AI platforms are you building on, and have you thought seriously about what that association means for your brand? AI platforms are no longer neutral infrastructure. They carry values, make visible choices, take public positions. The AI your business relies on is becoming part of your brand. When a platform’s ethics come into question—as they periodically and inevitably will—that exposure travels upstream to every company in its orbit. This creates both a risk conversation and a strategic opportunity. Evaluating AI partners on trust and values criteria, not just capability benchmarks, is the kind of decision that looks obvious in hindsight and prescient in the moment.

The Brand Codes Are Being Written Now

Early positioning in emerging categories hardens fast. The companies that define what a space stands for, not just what it does, shape expectations for years. We saw it with social media, with streaming, with fintech. In each case, the brands that defined the category’s values, not just its features, built loyalty advantages that capability alone couldn’t disrupt. AI is at that moment. The conversation about what kind of category this is going to be is happening now, in public, in real time. Stop asking which AI is most capable. Start asking which AI your business can afford to be associated with. Because our whirlwind romance with AI is fast turning into something more serious: committed, often exclusive, long-term relationships where platform loyalties get more embedded and more entrenched by the day. Choose carefully. Credibility compounds faster than compute. The data is already proving it. View the full article
  22. At a time when mainstream brands live in fear of getting dragged into a contentious political landscape, there’s something curiously benign, almost feel-good, about “Florsheimgate.” If you’ve somehow missed it, this particular instance of an involuntary pop-culture brand cameo came about following press reports this week that President Donald Trump has become an enthusiast—and de facto brand ambassador—for Florsheim dress shoes, gifting pairs to cabinet members and media allies. The upshot is that less-than-$150 Florsheims have become “the hottest and most exclusive MAGA status symbol,” according to The Wall Street Journal. But more to the point, administration insiders who don’t find the brand “hot” in the slightest, and would likely prefer more luxurious footwear, are sticking with the shoes Trump gives them—even, weirdly, if they don’t fit. This naturally caught the attention of MAGA critics, who promptly lit up social media with mockery of the 79-year-old president’s taste and allegedly Stalinesque bullying of his compliant minions. And this included some collateral damage for the venerable, and some might say dowdy, Florsheim. But really, even the inevitable dunking (what a dated mall brand!) seemed good-humored. “Florsheim,” one Bluesky user wrote. “When a Gift From Wicks n’ Sticks Just Isn’t Enough.” Others added comments like “florsheim didn’t go out of business in like 1978?” and “Florsheim shoes? Man, that guy’s brain really is stuck in the 80’s” and “Ok I give. What’s Florsheim.” And of course plenty of memes. “I get the feeling we’ll be discussing Florsheim shoes today,” wrote 𝕊𝕦𝕟𝕕𝕒𝕖 𝔾𝕦𝕣𝕝 (@sundaedivine.lol). Funny, but well short of a dangerous brand backlash. Nobody’s demonizing Florsheim-wearers in general, putting out videos of shooting up loafers, or organizing a grassroots brand-oppo campaign on behalf of Vuitton loafers.
To the contrary, it seems, at worst, to be a short-term, almost charming free publicity reminder to those who don’t know that the brand is still around—and, apparently, thriving. Turns out, Florsheim enjoyed “record” wholesale sales of $92 million in 2025, according to parent Weyco Group’s most recent earnings release and call earlier this month, “demonstrating resilience in a declining market for non-athletic brown shoes.” The Florsheim brand has a choppy history dating all the way back to 1892. Worn by everyone from Harry Truman to Michael Jackson, it’s a brand deeply embedded in American consumer culture, a staple brand of the suburban shopping mall’s heyday. But it also endured a bankruptcy filing in 2002. It’s now part of the Weyco Group, whose CEO is Thomas Florsheim Jr., a fifth-generation Florsheim. (Sales of other Weyco brands Nunn Bush, Stacy Adams, and Bogs were down last year, dragging down revenue and earnings for the company overall.) Weyco did not respond to an inquiry from Fast Company, but CEO Florsheim told The Journal he was not aware of Trump’s orders (and declined further comment). In the conference call (which predated this week’s news of Trump’s fandom), the CEO was upbeat, calling Florsheim “one of the few men’s [shoe] brands outside of the athletic category to sustain this level of post-pandemic growth. While the non-athletic brown shoe category has been in secular decline, Florsheim has bucked the trend and gained market share.” Whether that’s true or not, the association with Trump seems more like a passing entertainment than a brand controversy. At a moment of profound tension brought on by war and the threat of a new global oil crisis, Florsheimgate didn’t land like a point of contention; it was more like comic relief. In an interesting footnote, Weyco noted in its earnings call that tariff impacts—which CEO Florsheim has groused about in the past—“significantly affected gross margins” in 2025.
Those tariffs have since been judged illegal by the Supreme Court, and the company “is optimistic about retrieving $16 million from tariff refunds.” Maybe Trump’s Cabinet members should keep a spare pair of another brand’s loafers at the office, just in case Florsheim goes out of fashion at the White House. View the full article
  23. The latest accusations suggest a manager instructed a loan officer to photograph confidential data and process it in ChatGPT to avoid detection. View the full article
  24. For the first time that I can remember, this year I was completely enthralled by the Winter Olympics. In fact, I don’t think I’d ever watched the Winter Games before, but it really caught my attention this go-round. One event that really stood out for me was the skeleton. For the uninitiated (like I was just a month ago), the skeleton is a sliding sport in which athletes lie face down, headfirst, on a small sled going 80 mph down an icy, declining track. On the surface, it doesn’t look like it requires much from the athlete but to lie down and hang on for dear life until crossing the finish line. But upon further inspection, the sport is far more intricate, requiring the athlete to make subtle adjustments with their shoulders, knees, and even their toes to control and steer the sled. The slightest weight shifts can make the difference between first place and last. As if the Olympics weren’t competitive enough, the margin of error in this event is minuscule. I was fascinated, particularly by the idea of finding balance. There’s so much talk about work-life balance, work-self balance, and just about any other “something-something” balance where the two somethings seem to be at odds with each other. To find balance, we make subtle adjustments throughout our days and weeks—blocking off time, making time, taking time—in hopes of steering our lives and maintaining control of ourselves. However, according to Misan Harriman, balance is less of an “act” and more of a series of choices that informs action; it’s not what we decide to do but who we choose to be.

Raw and honest moments of humanity

Harriman is a photographer, activist, and Oscar-nominated filmmaker whose work has been prominently featured in publications like Vogue, celebrated on awards stages, and widely shared throughout the zeitgeist. His work captures the raw and honest moments of humanity—in resistance, grief, joy, and all the many manifestations of our true existence.
Our conversation with Harriman on the From the Culture podcast explored the balancing act of profitability and principle, where he argues that “profit at all costs” carries a heavy price tag that can cost us our authenticity. We make decisions at work that call into question the integrity of who we perceive ourselves to be outside of the office. Tech CEOs sell products to schools that they hardly ever let their own children use. Managers treat their subordinates in ways that would anger them if it were something their spouse had to endure. Whether it’s the way we communicate with peers or manage our presentation of self at work, far too often there is an imbalance between ourselves—who we say we are and how we are. Our inconsistent performances of self not only cause harm in our work but can also cause a crisis of authenticity. Fittingly, sociologist Erving Goffman likens the theatrical stage to the dynamics of social living, borrowing from William Shakespeare’s comedy As You Like It, where he writes, “All the world’s a stage, and all the men and women merely players.” Our presentation of self, as Goffman posits, is a choice we make. We decide which character we choose to play in social life. This choice subsequently demands a series of decisions that coincides with said character. The costumery. The script. The mannerisms. The exits and entrances. They are all by-products of the character we choose to play. That is to say, who we choose to be informs how we choose to be.

A choice of character

Through this lens, the balancing act of work-life or work-self is a choice of character and commitment to it. And although we attempt to balance the existence of two characters with adjustments here and there, like the athletes in the skeleton event, these seemingly subtle shifts of self can have tremendous impact. The idea then is to remain true to self, one character that is consistent despite the context. This is, after all, the definition of authenticity.
As Goffman warns, we should pay mind to the mask we choose to wear because if we aren’t careful, our mask could soon become our face. This means we have agency in the matter. We can decide who we want to be and, therefore, how we’re going to behave. We have a choice; but when we don’t choose, the context will certainly choose for us. Check out our full conversation with Misan Harriman on the latest episode of From the Culture here on Spotify or wherever you get your podcasts. View the full article
  25. Google's Gary Illyes offered a candid overview of Googlebot, explaining there are hundreds of crawlers that are not publicly documented. The post Google Says They Deploy Hundreds Of Undocumented Crawlers appeared first on Search Engine Journal. View the full article
  26. The modern workplace is designed for early risers. But only about 30% of people are true morning types. The rest fall somewhere in between—or toward the later end of the spectrum (those who think, create, and perform best later in the day). Through my work implementing circadian health and performance programs in organizations across 17 countries, I’ve discovered three strategies to help night owls create workdays that protect their energy, creativity, and well-being so they can perform better and share their true talents.

1. Give yourself a slow start

As a night owl, your day simply starts later—and that’s by design. Give your body time to wake naturally and ease into the day without rushing. Morning daylight (outside) can help, as it’s your internal clock’s strongest synchronization signal. Get at least 20 minutes of daylight before noon. This exposure won’t turn you into a morning person, but it helps stabilize your rhythm, reduce social jet lag, and boost alertness when your day begins. Magne, a late chronotype I work with, thrives when he can start his day quietly and let his energy build through the morning. When he aligns his schedule with his rhythm—working deeply in the afternoon and protecting calm mornings—his focus and creativity soar. If your organization’s rhythm starts earlier than yours, make micro-adjustments: Move demanding work to the afternoon, take short daylight breaks, or negotiate one or two later start times per week. Even small shifts can make a measurable difference to your sleep quality and mood, because they help protect the REM sleep that fuels creativity and emotional balance. Most of your REM sleep happens in the final hours of the night—so when an alarm cuts off those last one to two hours, you can lose up to half of your REM. Small changes like these help you reclaim that vital recovery time and bring your body back in sync.

2. Do your hardest work later

Your performance peaks in the afternoon or evening.
Use those hours intentionally for strategy, problem-solving, and creative work. If you have some flexibility to set your work schedule, protect late-day focus blocks where you can work without interruption. And always set a clear end time so that your late energy doesn’t steal the sleep that refuels it. You thrive when working in the evenings, but turn off your computer at least one hour before you go to bed. The light from screens delays melatonin and can push your sleep window even later.

3. Schedule afternoon exercise

Your body is at its physical best later in the day. Research shows that late chronotypes perform up to 26% better in the afternoon and evening compared to the morning. Strength, flexibility, and coordination all peak as your temperature and alertness rise. That’s why it’s important to schedule exercise in the afternoon or early evening, when your body is naturally primed. It’s not just better for performance—it also supports sleep quality by helping you wind down gradually. Evenings are also when your social energy is highest. Many cultural and social activities—concerts, theater, dinners, and gatherings—are already designed for night owls. When you align your day with your biology, you protect your energy and unlock your full potential. And when leaders replace moral judgment with biological understanding, they unlock trust, creativity, and genuine performance. As jazz legend Miles Davis put it: “Sometimes it takes a long time to sound like yourself.” Designing your workday around your chronotype is one of the fastest ways to sound—and work—like yourself. View the full article
  27. The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela. While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today. In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and reality in military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but the people who use them. In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of considerable debate. Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel.
Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration, and cybersecurity. Claude is an example of a decision support system, not a weapon. Claude is embedded in the Maven Smart System, used widely by military, intelligence, and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities. The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attack on Iran, and some of the issues that arise from its use.

The long history of military AI

Weapons with some degree of autonomy have been used in war for well over a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the U.S. Patriot system, have long offered fully automatic modes. Robotic drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of “dull, dirty, and dangerous” tasks on land, at sea, in the air, and in orbit. Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes.
Combatants in the Russia-Ukraine war have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators. But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain. Cold War-era command-and-control systems anticipated modern decision support systems such as Israel’s AI-enabled Tzayad for battle management. Automation research projects like the U.S.’s Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency’s strategic computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the Electronic Battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has developed new ideas and organizational concepts. Particularly noteworthy is the emergence of a new style of special operations during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process.
Systems like Maven became essential for this style of counterterrorism. The impressive American way of war on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations, and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running. AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was often misled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser accidentally shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999. Many Iraqi and Afghan civilians died due to analytical mistakes and cultural biases within the U.S. military. Most recently, evidence suggests that a Tomahawk cruise missile struck a girls school adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting could have resulted from a U.S. intelligence failure.

Automated prediction needs human judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, lax Israeli rules of engagement likely matter more than automation bias. As the name implies, decision support systems support human decision-making; AI does not replace people.
Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing, and protecting their systems and data flows. Commanders still command. In economic terms, AI improves prediction, which means generating new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values, and commitments regarding real-world outcomes, but AI systems intrinsically do not. In my view, this means that increasing military use of AI is actually making humans more important in war, not less. Jon R. Lindsay is an associate professor of cybersecurity and privacy and of international affairs at the Georgia Institute of Technology. This article is republished from The Conversation under a Creative Commons license. Read the original article. View the full article



