Everything posted by ResidentialBusiness
-
Use the 'Production Effect' to Study More Effectively
I'm going to say something that sounds contradictory at first, though I swear it does make sense eventually: You should always study in silence, but a little noise can be helpful for remembering things. Specifically, your noise can be helpful—that is, when you’re speaking out loud. If you practice the “production effect,” it can help you remember what you’re studying. Here’s how to use it the next time you’re trying to remember something challenging.

What is the production effect?

The production effect refers to what happens when you use vocalizing as a mnemonic to improve your memory of a new concept. Basically, your memory favors words you read aloud over the ones you read silently. When you speak out loud, you’re producing something with your material, which is how the effect gets its name. Research has also shown that the more distinct things you produce, the better you’ll remember whatever you’re saying—so being loud or even singing the new information is more helpful than just reading it out loud.

How to capitalize on the production effect when studying

You have a few options when it comes to trying this out for yourself during a study session. At the most basic level, you can read your notes or textbook out loud to yourself, but in line with the research supporting the value of distinctiveness, I’d recommend taking it further. You can always rely on the Feynman technique, where you teach someone else the material you’re studying, and make sure you’re doing it out loud. I've recommended using ChatGPT to work through the Feynman method before, but if you're trying to tap into the production effect, that's not going to cut it this time. You can practice a few times with AI if you need to or if no one else is around to study with, but you should be going over it at least once out loud with someone else.

Try incorporating the production effect into your flashcard use, too. When using the Leitner system, for instance, read your flashcards out loud to yourself (a small sketch of the Leitner scheduling logic appears at the end of this piece). This approach is solid because it doesn't rely on anyone else participating. You don't need anyone else around to capitalize on the value of the production effect and, in fact, it's usually better to study on your own because you avoid distractions or being held back by someone else's schedule or lack of enthusiasm.

I’ve already recommended making a “personal podcast” for your studies, too, and that’s helpful here not only because it gives you something to listen to over and over until you grasp it, but because you have to speak the material out loud the first time around, which plays right into the production effect. This is your two-for-one option and, provided you have the patience to script and read your materials, record them, and listen back to them repeatedly, it's likely your best one.
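As an aside, the Leitner system mentioned above is simple enough to sketch in code. This is a minimal, hypothetical TypeScript model of the box-promotion logic (correct answers move a card up a box, so it comes up less often; misses send it back to box 1), not any particular flashcard app's implementation:

```typescript
// Minimal sketch of Leitner-style scheduling (hypothetical, not from any app).
// Box 1 is reviewed every session; higher boxes are reviewed less often.

interface Card {
  front: string; // the prompt you read out loud (the production effect)
  back: string;
  box: number;   // 1..MAX_BOX
}

const MAX_BOX = 5;

// After a review, promote correct cards one box; send misses back to box 1.
function review(card: Card, correct: boolean): Card {
  return { ...card, box: correct ? Math.min(card.box + 1, MAX_BOX) : 1 };
}

// A common convention: box n comes due every 2^(n-1) sessions.
function isDue(card: Card, session: number): boolean {
  return session % 2 ** (card.box - 1) === 0;
}

// Usage: collect today's due cards, then read each one aloud.
const deck: Card[] = [
  { front: "production effect", back: "reading aloud improves recall", box: 1 },
];
const due = deck.filter((c) => isDue(c, 4)); // session number 4
console.log(due.map((c) => c.front));
```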
-
How different AI engines generate and cite answers
Generative AI is no longer a single thing. Ask, “What is the best generative AI tool for writing PR content?” or “Is keyword targeting as impossible as spinning straw into gold?”, and each engine will take a different route from prompt to answer. For writers, editors, PR pros, and content strategists, those routes matter – every AI system has its own strengths, transparency, and expectations for how to check, edit, and cite what it produces.

This article covers the top AI platforms – ChatGPT (OpenAI), Perplexity, Google’s Gemini, DeepSeek, and Claude (Anthropic) – and explains how they:

- Find and synthesize information.
- Source and train on data.
- Use or skip the live web.
- Handle citation and visibility for content creators.

The mechanics behind every AI answer

Generative AI engines are built on two core architectures – model-native synthesis and retrieval-augmented generation (RAG). Every platform relies on a different blend of these approaches, which explains why some engines cite sources while others generate text purely from memory.

Model-native synthesis

The engine generates answers from what’s “in” the model: patterns learned during training (text corpora, books, websites, licensed datasets). This is fast and coherent, but it can hallucinate facts because the model creates text from probabilistic knowledge rather than quoting live sources.

Retrieval-augmented generation

The engine:

- Performs a live retrieval step (searching a corpus or the web).
- Pulls back relevant documents or snippets.
- Synthesizes a response grounded in those retrieved items.

RAG trades a bit of speed for better traceability and easier citation. (A minimal code sketch of this loop appears at the end of this article.) Different products sit at different points on this spectrum. The differences explain why some answers come with sources and links while others feel like confident – but unreferenced – explanations.

ChatGPT (OpenAI): Model-first, live-web when enabled

How it’s built

ChatGPT’s family of GPT models is trained on massive text datasets – public web text, books, licensed material, and human feedback – so the baseline model generates answers from stored patterns. OpenAI documents this model-native process as the core of ChatGPT’s behavior.

Live web and plugins

By default, ChatGPT answers from its training data and does not continuously crawl the web. However, OpenAI added explicit ways to access live data – plugins and browsing features – that let the model call out to live sources or tools (web search, databases, calculators). When those are enabled, ChatGPT can behave like a RAG system and return answers grounded in current web content.

Citations and visibility

Without plugins, ChatGPT typically does not supply source links. With retrieval or plugins enabled, it can include citations or source attributions depending on the integration. For writers: expect model-native answers to require fact-checking and sourcing before publication.

Perplexity: Designed around live web retrieval and citations

How it’s built

Perplexity positions itself as an “answer engine” that searches the web in real time and synthesizes concise answers based on retrieved documents. It defaults to retrieval-first behavior: query → live search → synthesize → cite.

Live web and citations

Perplexity actively uses live web results and frequently displays inline citations to the sources it used. That makes Perplexity attractive for tasks where a traceable link to evidence matters – research briefs, competitive intel, or quick fact-checking.
Because it’s retrieving from the web each time, its answers can be more current, and its citations give editors a direct place to verify claims.

Caveat for creators

Perplexity’s choice of sources follows its own retrieval heuristics. Being cited by Perplexity isn’t the same as ranking well in Google. Still, Perplexity’s visible citations make it easier for writers to copy a draft and then verify each claim against the cited pages before publishing.

Dig deeper: How Perplexity ranks content: Research uncovers core ranking factors and systems

Google Gemini: Multimodal models tied into Google’s search and knowledge graph

How it’s built

Gemini (the successor family to earlier Google models) is a multimodal LLM developed by Google/DeepMind. It’s optimized for language, reasoning, and multimodal inputs (text, images, audio). Google has explicitly folded generative capabilities into Search and its AI Overviews to answer complex queries.

Live web and integration

Because Google controls a live index and the Knowledge Graph, Gemini-powered experiences are commonly integrated directly with live search. In practice, this means Gemini can provide up-to-date answers and often surface links or snippets from indexed pages. The line between “search result” and “AI-generated overview” blurs in Google’s products.

Citations and attribution

Google’s generative answers typically show source links (or at least point to source pages in the UI). For publishers, this creates both an opportunity (your content can be quoted in an AI overview) and a risk (users may get a summarized answer without clicking through). That makes clear, succinct headings and easily machine-readable factual content valuable.

Anthropic’s Claude: Safety-first models, with selective web search

How it’s built

Anthropic’s Claude models are trained on large corpora and tuned with safety and helpfulness in mind. Recent Claude models (the Claude 3 family) are designed for speed and high-context tasks.

Live web

Anthropic recently added web search capabilities to Claude, allowing it to access live information when needed. With web search rolling out in 2025, Claude can now operate in two modes – model-native or retrieval-augmented – depending on the query.

Privacy and training data

Anthropic’s policies around using customer conversations for training have evolved. Creators and enterprises should check current privacy settings for how conversation data is handled (opt-out options vary by account type). This affects whether the edits or proprietary facts you feed into Claude could be used to improve the underlying model.

DeepSeek: Emerging player with region-specific stacks

How it’s built

DeepSeek (and similar newer companies) offers LLMs trained on large datasets, often with engineering choices that optimize them for particular hardware stacks or languages. DeepSeek in particular has focused on optimization for non-NVIDIA accelerators and rapid iteration of model families. Its models are primarily trained offline on large corpora, but they can be deployed with retrieval layers.

Live web and deployments

Whether a DeepSeek-powered application uses live web retrieval depends on the integration. Some deployments are pure model-native inference; others add RAG layers that query internal or external corpora. Because DeepSeek is a smaller, younger player compared with Google or OpenAI, integrations vary considerably by customer and region.
For content creators

Watch for differences in language quality, citation behavior, and regional content priorities. Newer models sometimes emphasize certain languages, domain coverage, or hardware-optimized performance that affects responsiveness for long-context documents.

Practical differences that matter to writers and editors

Even with similar prompts, AI engines don’t produce the same kind of answers – or carry the same editorial implications. Four factors matter most for writers, editors, and content teams:

Recency

Engines that pull from the live web – such as Perplexity, Gemini, and Claude with search enabled – surface more current information. Model-native systems like ChatGPT without browsing rely on training data that may lag behind real-world events. If accuracy or freshness is critical, use retrieval-enabled tools or verify every claim against a primary source.

Traceability and verification

Retrieval-first engines display citations and make it easier to confirm facts. Model-native systems often provide fluent but unsourced text, requiring a manual fact-check. Editors should plan extra review time for any AI-generated draft that lacks visible attribution.

Attribution and visibility

Some interfaces show inline citations or source lists; others reveal nothing unless users enable plugins. That inconsistency affects how much verification and editing a team must do before publication – and how likely a site is to earn credit when cited by AI platforms.

Privacy and training reuse

Each provider handles user data differently. Some allow opt-outs from model training. Others retain conversation data by default. Writers should avoid feeding confidential or proprietary material into consumer versions of these tools and use enterprise deployments when available.

Applying these differences in your workflow

Understanding these differences helps teams design responsible workflows:

- Match the engine to the task – retrieval tools for research, model-native tools for drafting or style.
- Keep citation hygiene non-negotiable.
- Verify before publishing.
- Treat AI output as a starting point, not a finished product.

Understanding AI engines matters for visibility

Different AI engines take different routes from prompt to answer. Some rely on stored knowledge, others pull live data, and many now combine both. For writers and content teams, that distinction matters – it shapes how information is retrieved, cited, and ultimately surfaced to audiences. Matching the engine to the task, verifying outputs against primary sources, and layering in human expertise remain non-negotiable. The editorial fundamentals haven’t changed. They’ve simply become more visible in an AI-driven landscape.

As Rand Fishkin recently noted, it’s no longer enough to create something people want to read – you have to create something people want to talk about. In a world where AI platforms summarize and synthesize at scale, attention becomes the new distribution engine. For search and marketing professionals, that means visibility depends on more than originality or E-E-A-T. It now includes how clearly your ideas can be retrieved, cited, and shared across human and machine audiences alike.
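To make the model-native vs. retrieval-augmented split concrete, here is the minimal RAG sketch promised above, in TypeScript. The `searchWeb` and `generate` functions are hypothetical stubs standing in for whatever retrieval index and language model a given engine uses; this shows the shape of the technique, not any vendor's actual API.

```typescript
// Hypothetical RAG loop: retrieve first, then ground the answer in what came back.

interface Doc {
  url: string;
  snippet: string;
}

// Stub retrieval: a real engine would query a live index or the web here.
async function searchWeb(query: string): Promise<Doc[]> {
  return [{ url: "https://example.com/source", snippet: `Stub result for "${query}"` }];
}

// Stub model call: a real engine would call an LLM here.
async function generate(prompt: string): Promise<string> {
  return `(model output grounded in a ${prompt.length}-character prompt)`;
}

async function answerWithRag(query: string) {
  const docs = await searchWeb(query); // 1. live retrieval step
  const context = docs.map((d, i) => `[${i + 1}] ${d.snippet}`).join("\n");
  const answer = await generate(      // 2. synthesis grounded in retrieved items
    `Answer using only the sources below, citing them as [n].\n${context}\n\nQ: ${query}`
  );
  return { answer, sources: docs.map((d) => d.url) }; // 3. citations come along for free
}

// Model-native synthesis, by contrast, is just generate(query): fluent, but unsourced.
answerWithRag("Is keyword targeting as impossible as spinning straw into gold?")
  .then(console.log);
```

The design point is the trade the article describes: the extra retrieval round-trip costs latency, but the sources ride along with the answer, which is why retrieval-first engines can show citations by default.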
-
Wi-Fi World Congress USA will be back in Mountain View this April 13-15. Sign up now!
Last year's WWC USA was a smash hit, so we're returning to the same great venue and central location at the iconic Computer History Museum.
-
Nobel Peace Prize winner Maria Corina Machado, Venezuela’s ‘Iron Lady’
Industrial scion remade herself as a grassroots campaigner bent on unseating President Nicolás Maduro
-
New: Google Business Profile Report Negative Review Extortion Scams
Google has published a new help document on the topic of Google Business Profile negative review extortion scams. The document explains what they are, how to report the scam, what to expect, and, more importantly, what not to do.
-
Google Ads Missed Growth Opportunities Tab
Google is testing a new section within the Google Ads advertising console named "Missed Growth Opportunities." This section shows you the "performance you could've achieved in the last year if you'd adjusted your bids and budgets for campaigns with growth opportunities," according to Google.
-
How Alexis Ohanian’s all-women track and field league is tapping into a growing demand for women’s sports
Megan Rapinoe, Caitlin Clark, Serena Williams, Mia Hamm, Lindsey Vonn—the list of high-profile, recognizable women athletes is growing. And track and field athletes may be the next to become household names. That’s the bet that Alexis Ohanian is making with Athlos, an all-women’s track and field league, which is hosting its second event in New York this week. Ohanian is perhaps best known as the cofounder of Reddit, but he’s also an investor who’s made no secret of his interest in investing in sports. During a press event in New York City this week, he said the idea for Athlos came to him while watching the Olympics, during which millions of people tune in to watch track and field events. His logic: Why not try to tap into that audience outside of larger events?

That’s how Athlos was born. The league is an attempt to capture the excitement around track and field that has, traditionally, only existed around the Olympics and other big-time meets. Last year, its inaugural event drew 3 million viewers, attracted big-name advertisers like Toyota and Tiffany & Co., and was more successful than anticipated, Ohanian says. This year, he expects it to be even bigger, and hopes that some of the athletes taking part will start to become recognizable to casual sports fans—perhaps on a level that even matches what Clark has achieved in basketball.

“Americans are going to be paying attention”

When the Olympics or World Championship track and field events come around, “Americans are going to be paying attention,” Ohanian says. “[We] have a legacy of American excellence in the sport, especially among our women. No one doubted for a second that women’s soccer, women’s basketball, was a tier-one opportunity.”

Ohanian was joined at the press conference by a lineup of Olympic gold medalists, world champions, and record holders, including Masai Russell, Alexis Holmes, Grace Stark, Keely Hodgkinson, Faith Kipyegon, and Georgia Hunter Bell. Those athletes and others will compete this week for a top prize of $60,000. And Athlos is also giving some of them the chance to focus entirely on their chosen events, something that, until very recently, only the top sliver of athletes have been able to do. British mid-distance runner Georgia Hunter Bell, for instance, said that she only recently was able to quit her full-time job in tech sales to focus on track and field—and she’s an Olympic medalist who has set national records. “It was hard training around a full-time, corporate job and trying to train like a professional athlete,” she said. But leagues like Athlos are creating pathways to the pros for athletes like her.

Globally, revenue from women’s sports doubled from 2023 to 2024 and was expected to exceed $2.3 billion this year, according to a March 2025 report from Deloitte. Basketball and soccer remain the sector’s biggest moneymakers. Athlos is only the most recent attempt at creating a professional-level league for female athletes. Several others are already gaining traction, including the WNBA, the National Women’s Soccer League (NWSL), and the Professional Women’s Hockey League (PWHL). The demand is there, Ohanian argues, and there’s a big business opportunity for brands, advertisers, athletes, and others to get in on it—though he thinks it’ll take some time. “I think this is just the start. We’re not at F1 (Formula 1) size yet,” he said.
“But I want people to take for granted that this sport—which, again, is the most popular sport during the Olympics—can have the same-size platform outside of that, and bring together athletes, builders, CEOs, and investors who can keep driving it.”
-
GA4 Surge In Organic Search Traffic But Search Console Flat
There are a good number of reports of GA4 (Google Analytics) showing a huge surge in organic search traffic over the past couple of days, while Google Search Console is not showing any increase; it remains flat. Some suspect this is the result of fake bot traffic not being filtered by GA4, but others are not sure.
-
Tesla’s ‘Robotaxi’ service is no Waymo
Greetings, salutations, and my thanks—as always—for reading Fast Company’s Plugged In.

Me: “Are you here just to monitor for safety?”

Guy sitting in the driver’s seat of the Tesla Model Y I’m riding in: “I can’t specify.”

That was the extent of the conversation during a recent trip I took using Tesla’s Robotaxi service. I was curious why the car that picked me up had a human in it: After all, Tesla bills its service as “the future of autonomy,” and the car did, in fact, drive itself for the entirety of my 4.5-mile journey. But I didn’t get any answers from this guy—or Tesla the corporate entity, which prides itself on ignoring the media and didn’t reply to my emailed questions.

This we do know. After years of anticipation, Tesla rolled out Robotaxis in Austin in June. A month later, they arrived here in the San Francisco Bay Area. For now, however, it falls far short of a full-blown deployment of a small-r robotaxi service. (The term is often used generically for self-driving car services as well as being a Tesla trademark.) For one thing, I spent time on a waitlist before being allowed to start hailing cars. For another, Tesla reportedly launched Robotaxi in the Bay Area without securing the necessary approval to operate a fleet of autonomous vehicles in California. Its Robotaxis are autonomous in the same sense that any Tesla car with the misleadingly named “full self driving” package is autonomous. Hence the need for the guy in the driver’s seat—a safety monitor who could take over in an emergency. It’s a little like getting a lift from a friend who owns a Tesla, who would also be required to stay behind the wheel. In Austin, where Tesla is further along in the regulatory process, the company initially put the safety monitors in the front passenger seat, but has reportedly moved them to the driver’s seat as well.

All of this is fundamentally different from Waymo, which began as Google’s self-driving car project and—after an extraordinarily long, hard, expensive slog—is up and running in Austin, San Francisco, and three other U.S. metropolitan areas. Waymos have no safety monitors; it’s just you, any friends or relatives you’ve brought along, and the car. The service is a triumph not only of technology but also the patient labor required to convince relevant officials that permitting unsupervised cars to drive themselves around a city isn’t a deadly mistake, and might even be safer than letting humans behind the wheel.

In its present incarnation, Tesla’s Robotaxi service also bears scant resemblance to the one Elon Musk talked up at an event in April 2019. Back then, the company was going to have a million such vehicles on the road—not Tesla-owned ones, primarily, but privately owned cars that could hit the road and ferry passengers when their owners didn’t need them. Oh, and Musk said that would happen by the end of 2020. During this presentation, Musk did cheerfully acknowledge his tendency to blow deadlines. But a half-decade later, his 2019 plan has barely inched toward reality. Even his comparatively modest recent prediction of Tesla autonomous ride-hailing being available to half the U.S. population (“subject to regulatory approvals”) by the end of 2025 is not going to happen. Meanwhile, I don’t intend to pay much attention to the Cybercab—a self-driving two-seater Tesla with no driver’s seat, allegedly due next year—until I’m riding in one.

Judged on their own merits as a mode of transportation, I’ve found my first Tesla Robotaxi trips uneven at best.
Unexpectedly, the best part has been their aggressively low pricing. Tesla has abandoned its original per-trip flat rate of $4.20 (ha ha, Elon!). But for my journeys, Robotaxi always beat Waymo and Uber pricing, sometimes by a lot. The first trip I took—returning home from a nearby mall—cost a mere $1.92.

More pluses: Tesla’s Robotaxi fleet of Model Y cars is more reliably clean and cushy than Uber or Lyft. The service covers a far broader swath of the Bay Area than Waymo. And unlike Waymos in their present form, Robotaxis can take the highway—presumably because they’re not officially autonomous vehicles—making longer trips practical.

Once one of the best-regarded car companies in the world, Tesla is now dogged with a reputation for safety problems, especially when it comes to self-driving: When I announced I’d been in a Robotaxi, one friend told me I shouldn’t put my life at risk like that. Grim humor aside, all the trips I’ve taken felt safe and involved efficient routes. (I should note that in one case, the safety monitor did have his hands on the wheel a fair amount of the time—whether out of necessity or force of habit, I can’t say.)

Still, moving past the question of autonomy for a moment, I didn’t find Robotaxis ready to replace ride-hailing services even in their traditional, human-driven form. How many taxis Tesla has in the Bay Area is a mystery, but it’s clearly not enough: When I checked, pickup wait times usually were in the neighborhood of 13 to 15 minutes and got as high as 29 minutes. Worse, the Robotaxi app often said no cars were available at all, and told me to try again later.

The Robotaxi software also lacks Waymo’s polish. When I leaned forward to play with the back-seat touchscreen, I kept getting a message telling me to put my (still buckled) seat belt back on or the car would pull over. That screen appeared to offer an array of entertainment options, but when I tapped on Netflix and Disney+, all I got were sign-up promos. Then there’s the service’s iPhone app, which has a rickroll-style gag involving tipping that I didn’t find funny the first time and don’t relish encountering on any future trips.

Most of all, the need for the safety monitor eviscerates any possibility of Robotaxi rivaling the experience Waymo is already delivering. Once you get over the coolness factor, much of Waymo’s value proposition is the aura of comfy isolation. You can take calls, bang away at your laptop, or maybe even take a short nap without feeling like privacy is an issue or you’re being rude to the driver. It’s the closest thing to an office on wheels I’ve ever encountered. The Robotaxi safety monitors may be a vestigial concession to regulatory reality, but they eliminate the magic of autonomy. When—if?—Tesla is able to safely ditch them, it will make the service a true Waymo competitor.

For now, the monitors I encountered were perfectly polite but also resistant to my efforts to chat them up. Here’s another dialogue I had with one:

Me: “So the car is driving itself?”

Safety monitor: “Yeah.”

Surveys show that many people who haven’t been in a self-driving car remain deeply skeptical of the whole idea. I can’t help but think that Tesla is missing an opportunity by not turning the monitors into ambassadors. Rather than deflecting inquiries, maybe they should be volunteering information, as if they were welcoming tour guides to the autonomous future Musk has promised but not yet fully delivered.
You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on FastCompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

More top tech stories from Fast Company

What’s it really like to use a folding phone? Life with the Google Pixel 10 Pro Fold
A gimmicky category is growing up. And skepticism be damned, it does feel like a taste of the future.

Meta’s next big bet for VR: Your living room
The company’s Hyperscape VR app will have the ability to create digital replicas of your surroundings. Photorealistic avatars may soon follow.

What can the rise and fall—and rebound—of NFTs teach us about the AI bubble?
After the market for NFTs collapsed in 2022, the tech quickly fell out of the mainstream. It never went away, though. Experts now tell Fast Company what went wrong and what’s next.

The flailing Trump Media is part of the Russell 3000 index. These states want to know why
The company’s poor performance raises questions about its inclusion.

This massive new data center is powered by used EV batteries
A new project from battery recycling startup Redwood Materials and data center builder Crusoe shows that it’s possible to build data centers cheaper and faster while also slashing emissions.

ChatGPT wants to be the new operating system. Here’s why that should worry us
Compute has become the new oil, and OpenAI just secured drilling rights.
-
Google Voice Search Now Powered By Speech-to-Retrieval (S2R)
Google has updated its Voice Search models to be powered by Speech-to-Retrieval (S2R). Google said S2R "gets answers straight from your spoken query without having to convert it to text first, resulting in a faster, more reliable search for everyone."
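For intuition about what "without converting it to text first" means mechanically, here is a toy TypeScript sketch of retrieval in a shared embedding space, the general idea behind dual-encoder systems of this kind. The audio encoder below is a made-up stub; Google's actual S2R models are not public, and nothing here is their implementation.

```typescript
// Toy sketch: rank documents against a vector encoded straight from audio,
// with no intermediate transcript that could be misheard.

type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const norm = (v: Vec) => Math.sqrt(dot(v, v));
const cosine = (a: Vec, b: Vec) => dot(a, b) / (norm(a) * norm(b));

// Stand-in for a real audio encoder trained to share a space with documents.
function embedAudio(_audio: ArrayBuffer): Vec {
  return [0.9, 0.1, 0.3];
}

interface Doc {
  url: string;
  vec: Vec; // document embedding, precomputed offline
}

// The old cascade is audio -> transcript -> text search, where any
// transcription error degrades the query. Here the query never becomes text.
function s2rRank(audio: ArrayBuffer, docs: Doc[]): Doc[] {
  const q = embedAudio(audio);
  return [...docs].sort((a, b) => cosine(q, b.vec) - cosine(q, a.vec));
}

const results = s2rRank(new ArrayBuffer(0), [
  { url: "https://example.com/a", vec: [0.8, 0.2, 0.4] },
  { url: "https://example.com/b", vec: [0.1, 0.9, 0.2] },
]);
console.log(results.map((d) => d.url));
```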
-
Microsoft Ads Posts On How To Optimize For AI Search Answers
The Microsoft Advertising blog posted about how to optimize for AI Search Answers. I thought it was weird to see this on the Microsoft Advertising blog and not on the Bing Search blog, because (a) it is an ads blog and (b) it was written by Krishna Madhavan, who is part of the Bing team, not the ads team.
-
Your crawl budget is costing you revenue in the AI search era by Semrush Enterprise
While online discussion obsesses over whether ChatGPT spells the end of Google, websites are losing revenue from a far more real and immediate problem: some of their most valuable pages are invisible to the systems that matter. Because while the bots have changed, the game hasn’t. Your website content needs to be crawlable.

Between May 2024 and May 2025, AI crawler traffic surged by 96%, with GPTBot’s share jumping from 5% to 30%. But this growth isn’t replacing traditional search traffic. Semrush’s analysis of 260 billion rows of clickstream data showed that people who start using ChatGPT maintain their Google search habits. They’re not switching; they’re expanding. This means enterprise sites need to satisfy both traditional crawlers and AI systems, while maintaining the same crawl budget they had before.

The dilemma: Crawl volume vs. revenue impact

Many companies get crawlability wrong because they focus on what is easy to measure (total pages crawled) rather than what actually drives revenue (which pages get crawled). When Cloudflare analyzed AI crawler behavior, it discovered a troubling inefficiency: for every visitor Anthropic’s Claude refers back to websites, ClaudeBot crawls tens of thousands of pages. This unbalanced crawl-to-referral ratio reveals a fundamental asymmetry of modern search: massive consumption, minimal traffic return. That’s why it’s imperative for crawl budgets to be directed toward your most valuable pages. In many cases, the problem isn’t having too many pages. It’s the wrong pages consuming your crawl budget.

The PAVE framework: Prioritizing for revenue

The PAVE framework helps manage crawlability across both search channels. It offers four dimensions that determine whether a page deserves crawl budget:

P – Potential: Does this page have realistic ranking or referral potential? Not all pages should be crawled. If a page isn’t conversion-optimized, provides thin content, or has minimal ranking potential, you’re wasting crawl budget that could go to value-generating pages.

A – Authority: The authority markers are familiar from Google, but as shown in Semrush Enterprise’s AI Visibility Index, if your content lacks sufficient authority signals – like clear E-E-A-T and domain credibility – AI bots will also skip it.

V – Value: How much unique, synthesizable information exists per crawl request? Pages requiring JavaScript rendering take 9x longer to crawl than static HTML. And remember: JavaScript is also skipped by AI crawlers.

E – Evolution: How often does this page change in meaningful ways? Crawl demand increases for pages that update frequently with valuable content. Static pages get deprioritized automatically.

Server-side rendering is a revenue multiplier

JavaScript-heavy sites are paying a 9x rendering tax on their crawl budget in Google. And most AI crawlers don’t execute JavaScript. They grab raw HTML and move on. If you’re relying on client-side rendering (CSR), where content assembles in the browser after JavaScript runs, you’re hurting your crawl budget.

Server-side rendering (SSR) flips the equation entirely. With SSR, your web server pre-builds the full HTML before sending it to browsers or bots. No JavaScript execution is needed to access the main content. The bot gets everything it needs in the first request. Product names, pricing, and descriptions are all immediately visible and indexable. A minimal sketch of the difference follows below.

But here’s where SSR becomes a true revenue multiplier: this added speed doesn’t just help bots, but also dramatically improves conversion rates.
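To see the CSR/SSR contrast in concrete terms, here is a minimal sketch using Node's built-in HTTP server. The routes and product data are hypothetical placeholders, not a production setup.

```typescript
// Minimal CSR vs. SSR contrast. A crawler that never executes JavaScript sees
// an empty shell on the CSR route and the full content on the SSR route.

import { createServer } from "node:http";

const product = { name: "Example Boot", price: "$149", desc: "Waterproof hiking boot" };

// CSR: content is assembled in the browser after /app.js runs, so non-rendering
// bots (including most AI crawlers) see only an empty <div>.
const csrShell =
  `<html><body><div id="app"></div><script src="/app.js"></script></body></html>`;

// SSR: the server pre-builds the full HTML once, so the very first response
// already contains the product name, price, and description.
const ssrPage = `<html><body>
  <h1>${product.name}</h1>
  <p>${product.desc} (${product.price})</p>
</body></html>`;

createServer((req, res) => {
  res.setHeader("Content-Type", "text/html");
  res.end(req.url === "/csr" ? csrShell : ssrPage);
}).listen(3000);
```

The point is structural: the SSR response carries the content itself, so a bot that never requests /app.js still gets everything it needs in one round trip.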
Deloitte’s analysis with Google found that a mere 0.1-second improvement in mobile load time drives:

- An 8.4% increase in retail conversions
- A 10.1% increase in travel conversions
- A 9.2% increase in average order value for retail

SSR makes pages load faster for users and bots because the server does the heavy lifting once, then serves the pre-rendered result to everyone. No redundant client-side processing. No JavaScript execution delays. Just fast, crawlable, convertible pages. For enterprise sites with millions of pages, SSR might be a key factor in whether bots and users actually see – and convert on – your highest-value content.

The disconnected data gap

Many businesses are flying blind due to disconnected data. Crawl logs live in one system. Your SEO rank tracking lives in another. Your AI search monitoring lives in a third. This makes it nearly impossible to definitively answer the question: “Which crawl issues are costing us revenue right now?” This fragmentation creates a compounding cost of making decisions without complete information. Every day you operate with siloed data, you risk optimizing for the wrong priorities.

The businesses that solve crawlability and manage their site health at scale don’t just collect more data. They unify crawl intelligence with search performance data to create a complete picture. When teams can segment crawl data by business unit, compare pre- and post-deployment performance side by side, and correlate crawl health with actual search visibility, crawl budget turns from a technical mystery into a strategic lever.

3 immediate actions to protect revenue

1. Conduct a crawl audit using the PAVE framework

Use Google Search Console’s Crawl Stats report alongside log file analysis to identify which URLs consume the most crawl budget. But here’s where most enterprises hit a wall: Google Search Console wasn’t built for complex, multi-regional sites with millions of pages. This is where scalable site health management becomes critical. Global teams need the ability to segment crawl data by region, product line, or language to see exactly which parts of the website are burning budget instead of driving conversions – the kind of precision segmentation that Semrush Enterprise’s Site Intelligence enables.

Once you have an overview, apply the PAVE framework: if a page scores low on all four dimensions, consider blocking it from crawls or consolidating it with other content. Focused optimization via improving internal linking, fixing page depth issues, and updating sitemaps to include only indexable URLs can also yield huge dividends.

2. Implement continuous monitoring, not periodic audits

Most businesses conduct quarterly or annual audits, taking a snapshot in time and calling it a day. But crawl budget and wider site health problems don’t wait for your audit schedule. A deployment on Tuesday can silently leave key pages invisible on Wednesday, and you won’t discover it until your next review, after weeks of revenue loss.

The solution is monitoring that catches issues before they compound. When you can align audits with deployments, track your site historically, and compare releases or environments side by side, you move from reactive fire drills to a proactive revenue-protection system.

3. Systematically build your AI authority

AI search operates in stages. When users research general topics (“best waterproof hiking boots”), AI synthesizes from review sites and comparison content.
But when users investigate specific brands or products (“are Salomon X Ultra waterproof, and how much do they cost?”), AI shifts its research approach entirely. Your official website becomes the primary source. This is the authority game, and most enterprises are losing it by neglecting their foundational information architecture. Here’s a quick checklist:

- Ensure your product descriptions are factual, comprehensive, and ungated (no JavaScript-heavy content)
- Clearly state vital information like pricing in static HTML
- Use structured data markup for technical specifications (a minimal example appears at the end of this article)
- Add feature comparisons to your domain; don’t rely on third-party sites

Visibility is profitability

Your crawl budget problem is really a revenue recognition problem disguised as a technical issue. Every day that high-value pages are invisible is a day of lost competitive positioning, missed conversions, and compounding revenue loss. With search crawler traffic surging, and ChatGPT now reporting over 700 million daily users, the stakes have never been higher.

The winners won’t be those with the most pages or the most sophisticated content, but those who optimize site health so bots reach their highest-value pages first. For enterprises managing millions of pages across multiple regions, consider how unified crawl intelligence—combining deep crawl data with search performance metrics—can transform your site health management from a technical headache into a revenue protection system.

Learn more about Site Intelligence by Semrush Enterprise.
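On the structured data point from the checklist above, here is a minimal sketch of schema.org Product markup emitted as static JSON-LD. The product values are placeholders borrowed from the article's example query; the schema.org types and properties themselves are standard.

```typescript
// Emit schema.org Product markup as static JSON-LD so search and AI crawlers
// get pricing and specs without executing JavaScript. Values are placeholders.

const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Salomon X Ultra",
  description: "Waterproof hiking boot",
  offers: {
    "@type": "Offer",
    price: "149.00",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// Inline this tag in the server-rendered <head>; no client-side fetch required.
const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;

console.log(jsonLdTag);
```

Because the markup ships as static HTML, it satisfies two checklist items at once: the pricing is stated in the raw document, and the technical specifications are machine-readable.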
-
How to make sense of Meta’s growing AI-powered advertising machine
This past June, Meta set off a bomb in the marketing world when it announced that it would fully automate the advertising on its platforms by 2026. People in advertising wondered: Is this the end of ad agencies as we know it? Has the AI “slopification” of social media finally been fully realized?

The hyperbolic reaction is understandable—maybe even justified. With 3.43 billion unique active users across its platforms around the world, and an advertising machine that brought in $47.5 billion in Q2 sales alone (up 22% over last year), Meta is an accurate bellwether for where the ad business is heading. Meta has been working for years to build a machine that is already pretty damn close to automating its entire ad system, from creative concept generation to determining whose eyeballs see the final product. Its current capabilities are good enough to give most advertising creatives the flop sweats.

But now is not the time for marketers to cower in fear. The opposite, actually. This is a great moment for marketers to face head-on how Meta views its relationship with creatives, agencies, and brands as it continues to roll out new technologies and features. To help, we asked Meta ad execs to break down their strategy. Below is a detailed explainer to help you understand how Meta is thinking about its role in the advertising space, and what brands, agencies, and even consumers can do to better prepare themselves for what’s to come.

In this premium piece, you’ll learn:

- What Meta’s new AI advertising tools are and how they work, straight from the people creating them
- The reason why agencies will always be a part of Meta’s advertising equation
- Which tools are turbocharging growth for marketers, according to Helen Ma, Meta’s VP of product management (GenAI ad formats, video growth, creative diversification)

Five key breakthroughs

Earlier this month, Meta announced a slew of features to its AI-powered ad platform, including virtual try-on tech, AI-generated video for advertisers, and generative CTA (call to action) stickers to replace the common “Buy Now” button. But to understand the significance of the new tools, it’s important to step back for a moment and dig into the technology infrastructure that powers Meta’s advertising system.

Over the past few years, Meta has systematically rebuilt its entire ad infrastructure around AI. Each innovation builds on previous advances, creating compounding improvements in the effectiveness of ads on its platforms. (Take haircare brand Living Proof, for example, which saw an 18% boost in purchases after using Meta’s generative AI feature for ad creative, compared to using its usual campaign strategy and creative.) This all-in-one-place approach to marketing tools reduces the operational burden for advertisers while increasing their dependence on Meta’s systems. The goal for Meta is to be as embedded as possible in a brand’s overall marketing operation.

Matt Steiner, Meta’s VP of monetization infrastructure for ranking and AI foundations, says there are essentially five key technological breakthroughs that underpin Meta’s AI advertising platforms. The focus is on automating and optimizing every part of the advertising process, from creative generation to targeting and performance measurement.
Here’s what you need to know:

Advantage Plus Shopping Campaigns: The Automated Ad Manager

Instead of advertisers needing large teams to constantly monitor their ads, analyze spreadsheets, and manually decide when to increase or decrease spending, Meta introduced Advantage Plus in 2022 to use machine learning models to do the heavy lifting. The AI constantly monitors which campaigns and audiences are performing well, and automatically redirects the budget and changes the bid strategy 24 hours a day to maximize results. “I think the key innovation that drives it is that machine learning models don’t get tired,” Steiner says.

He notes that this technology was key to Meta’s ad business when Apple introduced anti-tracking changes for iPhone users. Historically, Meta could track whether ads you saw on its platforms ultimately led to a purchase elsewhere, and this anti-tracking change cut off its lifeline to that information. Meta bypassed the blockage using transfer learning and combining its app data with advertisers’ own data on the people using its sites and making purchases.

Meta Lattice: The “Shared Knowledge” System

This is a deep machine learning technology that allows different AI models to learn from each other. Traditionally, Meta had separate AI models predicting different user behaviors. For example, one model would predict who will click on an ad, while another would predict who would actually buy the product. Announced in 2023, Lattice utilizes transfer learning, which allows these models to share knowledge. Transfer learning is a technique in which what a model learns from training on one task is reused to improve its performance on a related task.

Generative AI for Ads Creative: The Automatic Ad Designer

This set of tools, originally introduced in May 2024, automatically creates variations of a brand’s ad content across text and image backgrounds, as well as entirely new images from scratch. It then optimizes them to look good and perform well on Meta platforms. This saves advertisers time by allowing them to test and learn what consumers respond to much faster than a human team could. “Humans are best at coming up with novel ideas,” says Steiner. “They’re not really good at thinking of all variations of the word buy or sale, and that’s not something people are really excited to do. So with machine learning models to automate that, they can spend their time doing the things that are uniquely human-skilled, like coming up with new ideas and really understanding why a campaign will resonate with people—things that are not really automatable today.”

Andromeda: The High-Speed Ad Finder

The goal of all Meta advertising is to match the right brand’s ad to the right person at the time that person is most likely to click (and buy). Thanks to the new AI ad tools rolled out over the past year, the number of ads available in Meta’s system increased rapidly. Within a month of launching its first AI tools in 2024, more than a million advertisers used Meta’s generative AI tools to create more than 15 million ads. This essentially clogged the system and made it harder and slower for Meta to search through all those ads to find the few that might be relevant to any particular user. In December, Meta introduced Andromeda, a massive technical and hardware upgrade to Meta’s backend infrastructure that lends it up to 10,000 times more computing power.
Codesigned with Meta Training and Inference Accelerator (MTIA) and Nvidia’s Grace Hopper Superchip, Andromeda allows Meta’s system to handle the massive increase in demand for computing power from all the ads being created using its generative AI tools. Steiner says the result has been a dramatic improvement in the selection of relevant ads, increasing the likelihood of people finding a useful ad and ultimately driving up conversions for advertisers. According to the company, so far it has boosted conversions on Facebook mobile Feed and Reels by 4%.

Generative Ads Recommendation Model (GEM): The Customer Map

Introduced in April, GEM is a new AI model architecture for deciding which ad to show you, based on predicting future behavior. Just as an LLM uses sequence learning to predict the next logical item in a sequence, GEM does the same thing for ads. Instead of just predicting whether you’ll click on the next single ad, GEM tracks your entire history of ad interactions and purchases. This allows the model to recognize that you might be on several different, parallel “purchase journeys” simultaneously and react accordingly. (A toy illustration of this kind of sequence prediction appears at the end of this article.) The company says these improvements increased ad conversions by approximately 5% on Instagram and 3% on Facebook Feed and Reels in Q2.

Ad feeds of the future

This new backbone technology is powering all the ads you see, but it’s all more or less invisible. Here’s what Meta is betting on to get you buying more across Instagram, Facebook, and WhatsApp:

Virtual try-on: This is exactly what it sounds like. Meta is now testing with select advertisers the ability to see how clothing featured in an ad looks on them after they upload a photo of themselves.

AI sticker CTA (call to action): Most of the time you see an ad on Instagram, there’s a generic “Shop Now” button at the bottom. Now brands are going to be using custom AI-generated stickers that could be a product photo or a logo graphic to add a bit more flair. “We’re seeing something like 50% to 200% higher click-through rates on these AI-generated CTA stickers, because they’re fun and visually appealing and bring the product to life,” says Helen Ma, Meta’s VP of product management. Previously announced at Cannes, this visual enhancement is now available to more advertisers globally for Facebook Stories and testing for Facebook Reels, as well as Instagram Stories and Reels.

Creative generation upgrades: Meta rolled out two notable updates to its generative AI tool kit. First is an AI-generated music feature that understands the content of an ad and produces unique, custom music that reflects the product, style, and sentiment a brand wants to convey. It will also feature AI dubbing for international or multilingual audiences. The other is what Meta calls “persona-based image generation,” to help advertisers further personalize ads to different customers. This is like an AI vibes tool, changing the vibe of an ad to fit specific audiences. If you’re selling headphones, it can create one image that focuses on style for a fashion angle, one that highlights sound quality for audiophiles, and another that emphasizes comfort for travelers, all from the same product image.

Facebook creator discovery API: This makes it easier for brands to find creators on Facebook by allowing them and third-party partners (like agencies) to search for creators using keywords. It also helps agencies and brands explore creator insights like audience demographics and average engagement rate to find the best match.
Meta AI assistant-informed ads: The Meta AI digital assistant has more than 1 billion active monthly users and is available as a stand-alone app, as well as across Meta apps like Facebook, Instagram, WhatsApp, and Messenger. Starting December 16, the company will utilize users’ interactions with the AI assistant to inform the ads and other content it shows them.

Krassimir Karamfilov, Meta Advertising’s VP of product, says that the number of Meta platforms, combined with billions of users, makes it impossible for individual marketers to get the most out of their ads without the help of tools like this. “It’s just impossible to manually test all the potential variants, so this is why AI is just making it easier to experiment efficiently and then home in on what works,” Karamfilov says. He knows that some advertisers have expressed concerns over a lack of control, but he counters that ads perform better when they’re not limited to the brand’s initial parameters. “We see a lot of suboptimal usage of our products,” he says. “What we’re doing is all about aligning our systems to the way the advertisers measure value.”

Enter the AI Concierge

Meta isn’t stopping at the ads in your feed. It sees a bigger business opportunity in helping brands—especially small and midsize businesses—utilize AI agents in their own business operations, like customer service. Earlier this month, the company launched Business AI, which acts as a sales concierge to help take a consumer from an ad in a Meta feed all the way to purchasing a product. It acts as a personalized AI agent in Meta ads and messaging threads inside Meta platforms, and can even extend to a brand’s own website.

Clara Shih, VP of Business AI, says Meta’s clients were asking for help beyond the advertising side. “Our customers have said, ‘We want AI to not only help with product discovery and generating leads, help us all the way to closure, help us with our business operations, help us with customer support questions,’” Shih says.

A recent MIT study reported that 95% of enterprise generative AI pilots fail to deliver measurable business impact, despite a collective billions invested. Shih says Business AI takes the burden of infrastructure off of companies so they can feel that impact. “It’s just very hard, and a lot of companies don’t have big machine learning and AI teams where they can piece all these things together,” she says. “So something else that’s been really important to us is creating something that’s easy to set up and maintain.”

The benefit to brands is that a Meta-powered AI chatbot doesn’t have to learn about a brand from scratch, because so many businesses and brands have been active on Meta for years. Shih says all of their past ads and social posts are a gold mine of tacit knowledge about a business within the Meta universe, giving the Business AI chatbot a lot of information to work with from the start. “They don’t have to hire a consultant and pay millions of dollars to set up their chatbot. We could just look at what they’ve said and what they’ve done and what their brand is all about,” she says. “And just by using LLMs to mine that information, we’ve been able to create the world’s first turnkey business agent that just works because it’s them. It’s all based on what they’ve done.”

It’s all very fun and convenient in the short term. And Meta’s recent earnings prove that it’s working. Second-quarter ad sales hit more than $47 billion, a 22% increase year over year.
Every executive I spoke to emphasized that these are tools for humans to use, and that the company’s relationships with agencies and creatives are crucial to any of this working—at least for now. Just keep in mind that the end game here is still full automation. As Shih put it, “Mark [Zuckerberg] has talked about how in the future, the dream state for a business is to come to Meta, share their product catalog, share their business outcomes, and then we can automate the rest through a combination of Advantage Plus AI features, all as a business agent. And we are getting closer and closer to that.”
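The GEM idea described above (predicting the next ad interaction from a whole event history, the way an LLM predicts the next token) can be illustrated at toy scale. The TypeScript sketch below is a first-order frequency counter over made-up event strings; it shows what "sequence learning over interaction history" means in the smallest possible form and is emphatically not Meta's architecture.

```typescript
// Toy sequence model: count which event tends to follow which, then predict
// the most likely next event. Illustrative only; GEM is a large proprietary
// system, not a frequency counter.

type Event = string; // e.g. "view:shoes", "click:shoes-ad", "purchase:shoes"

function train(histories: Event[][]): Map<Event, Map<Event, number>> {
  const counts = new Map<Event, Map<Event, number>>();
  for (const h of histories) {
    for (let i = 0; i + 1 < h.length; i++) {
      const next = counts.get(h[i]) ?? new Map<Event, number>();
      next.set(h[i + 1], (next.get(h[i + 1]) ?? 0) + 1);
      counts.set(h[i], next);
    }
  }
  return counts;
}

function predictNext(
  counts: Map<Event, Map<Event, number>>,
  last: Event
): Event | undefined {
  const next = counts.get(last);
  if (!next) return undefined;
  return [...next.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// A user partway through two parallel "purchase journeys":
const model = train([
  ["view:shoes", "click:shoes-ad", "purchase:shoes"],
  ["view:tent", "click:tent-ad", "view:tent"],
  ["view:shoes", "click:shoes-ad", "purchase:shoes"],
]);
console.log(predictNext(model, "click:shoes-ad")); // "purchase:shoes"
```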
-
First Brands Group: dude, where’s my cash?
Cleveland, we have a problem
-
Pennymac, Loandepot, Chase make leadership moves
Onity adds a former Meta exec as director, and Click n' Close taps an industry veteran as president, while banks and credit unions boost their mortgage teams.
-
Is it okay to hug coworkers?
Maybe you’re meeting a coworker you’ve only known on Zoom in person for the first time. Maybe you’re greeting a group of coworkers at a conference, or saying goodbye after a team happy hour. Maybe a coworker has experienced a sudden loss. Or maybe you’re simply more of a hug person than a handshake person. Is embracing a colleague a faux pas—or worse?

Cultural moments like the #MeToo movement, as well as the hands-off norms established during the pandemic, have shaped opinions about when it’s okay to touch someone else. Although most people don’t greet their office mates with literal open arms each day, colleagues who’ve developed close bonds may feel inclined to hug from time to time. That’s probably not surprising, given the important role friendships play in the workplace. And with the rise of flexible work in the knowledge economy, where people are encountering colleagues in real life less frequently, interacting with them face-to-face feels more like an event that warrants something more than a firm business handshake. But is it okay to hug your coworkers? Even if they’re friends? It’s a touchy subject, depending on whom you ask—perhaps the touchiest.

To hug or not hug

Cameron Herold is decidedly pro-hug. The founder of the COO Alliance (a coaching practice for chief operating officers) and former COO of 1-800-GOT-JUNK says he started his career in the straightlaced, suit-and-tie, handshakes-only late ’80s. A trip to Burning Man in 2007 changed his ways. “Everybody was hugging. It pulled me way out of my comfort zone,” he says. Since that trip, he greets just about everyone with an embrace, including former Sprint CEO Marcelo Claure upon meeting him roughly eight years ago. “I’m 6-foot-3 and he’s 6-foot-7. I went in for an over-the-shoulder hug. He laughed, said that was a first ever,” he recalls. That hug led to two lucrative coaching engagements with Claure and his then-chief operating officer, Herold recalls.

Other workplace experts are more measured about coworkers and touch. Author and CEO coach Kim Scott, who’s coached at places like Dropbox and Twitter, says if you want to touch someone, it’s your job to find out first whether they’re comfortable being touched. “If they’re not comfortable, don’t touch,” she says, adding, “And if you’re not sure, don’t touch.”

Ignoring that advice could land you in hot water, says Virginia-based employment attorney Leah Stiegler. She says that legally, there’s a “two-pronged element” to deem an action offensive: The action must be considered offensive by a reasonable third person, in addition to the person touched feeling offended. But if an employee files a lawsuit alleging a hostile workplace or sexual harassment, the company will still have a costly defense on its hands, even if the case is thrown out, Stiegler says. Plus, even if the hug isn’t something that would rise to the level of a hostile workplace lawsuit, an internal report to human resources about unwanted touching could cause its own problems for the hugger.

When communications adviser Elizabeth Rosenberg worked at a previous employer, part of the company’s (pre-pandemic) annual holiday gathering was having all 300 employees walk the “line of leadership,” where company brass gave each a thank-you note and their bonus. Leaders also gave team members a hug, fist bump, or no touch, depending on the employee’s preference. Rosenberg says it was up to the team member to voice their preference.
“I feel like we have got to be better about taking responsibility for our own feelings,” she says. “If somebody is uncomfortable being hugged, I think they need to step up and say, ‘I’m not comfortable with hugging.’”

Before you reach, read the room

What makes some people open to embracing coworkers while others would rather chew glass? (As one Reddit poster put it: Since when are people giving hugs at work?) Even outside the office, “hug or no hug” has been a long-standing cultural debate. Social media debates and news articles capture the tension: In general, is hugging creepy? Sweet? Invasive? Affirming? Inappropriate? Appropriate?

We all have different backgrounds and preferences that can affect how we feel about being hugged or touched, says workplace mental health expert and licensed clinical social worker Christina Muller. “Some people call it a ‘love language,’” she says. An individual’s “language” of building connection may be physical touch, like hugging. Others may prefer time spent together, praise, or acts of service, for example. For those people, “getting a cup of coffee or just letting them vent could be a [preferred] source of comfort and connection,” she says.

Even with the risks, Stiegler stops short of recommending a blanket “no hugging/no touching” policy, which she says would be difficult to enforce. Instead, establish strong anti-harassment policies and train employees about the behavior expected in the workplace, including consent. One solution is to provide an easy way to dodge an unwanted hug, says author Ben Swire, founder of team-building company Make Believe Works. “The offer of a hug should always come with a graceful way out. It shouldn’t feel like, Ooh, if I say no, it’s going to feel weird and icky,” he says. You can literally just ask something along the lines of, “Can I hug you?” Swire says quickly asking something like, “Are you a hug person, a fist-bump person, or a wave person?” gives people an easy way to choose their comfort level.

Be aware of situational nuances as well, Scott says. In addition to preference and consent, consider whether there are gender or power dynamics in the relationship that could make touch inappropriate. If so, refrain from hugging, she advises. (And if alcohol is involved, it’s a very good idea to stay hands-off, she adds.) When in doubt, Stiegler says, just read the room: “If they’re wearing a COVID mask in 2025, don’t hug them.”
-
A new London sculpture depicts the complicated beauty of postpartum women
Hours after Princess Diana gave birth, she walked out onto the steps of the Lindo Wing, the private maternity ward of St. Mary’s Hospital in London, where she was met by photographers from around the world. As she introduced Prince William, and then, a couple of years later, Prince Harry, she looked radiant, with flawless makeup and flowing gowns. It was a portrait of maternal serenity.

It’s a beautiful image, one that captures many magical aspects of the hours after giving birth. But it is far from the full picture for the roughly 140 million women who enter the postpartum period every year. It likely did not even capture what Diana herself was feeling on those steps. “There’s a duality in those moments,” says Chelsea Hirschhorn, founder and CEO of Frida, a company that makes products for postpartum mothers and newborns. “You’re proud of what you’ve just accomplished and excited to enter this next chapter of life. But you’re exhausted, broken, hurting, and in pain.”

Today, at the Lindo Wing, Frida unveils a sculpture it commissioned from the British artist Rayvenn Shaleigha D’Clark that portrays a postpartum woman. Hirschhorn’s goal was to capture the complexity of women’s experience after giving birth, complicating the sanitized image the world has come to expect in this setting. The seven-foot-tall monument, entitled “Mother Vérité” (French for “truth”), is based on 3D scans of eight women from diverse backgrounds, and aims to realistically capture scars, swelling, and curves. The statue will travel around Europe and the United States over the coming months, ending up at Art Basel Miami in December.

Over the last few years, Frida has pivoted from a brand that creates products for newborns (like snot suckers for their stuffed noses) to solutions for postpartum women (like kits that reduce swelling when mammary glands get infected). Hirschhorn believes that many companies avoid tackling postpartum problems because they seem so taboo and unglamorous. So she’s been on a mission to help demystify the experience by starting cultural conversations about it.

In 2019, Hirschhorn wrote an open letter to Meghan Markle in the New York Times, asking her to skip the traditional postpartum photo and delineating all the painful experiences women face after giving birth. In the end, after the birth of her firstborn son, Archie, Markle did pose for a photo, but notably wore a dress that revealed her protruding postpartum belly, a move that Vanity Fair called subtly radical.

Hirschhorn was eager to continue the conversation, and it occurred to her that a public monument of a postpartum woman could be a way to tell this story. Only 4% of statues in London depict women, according to a study by the organization Art U.K. Meanwhile, 8% depict animals, while 79% depict men. “You can only honor what you can see,” says Hirschhorn. “How can we value the work that women, and mothers, do if it is truly invisible to society?”

[Photos: Chelsea Hirschhorn; Rayvenn Shaleigha D’Clark]

Hirschhorn commissioned the monument from D’Clark, a well-known digital sculptor whose work has been shown at the Victoria & Albert Museum and the Saatchi Gallery. D’Clark used 3D scanning machines to capture the bodies of eight women at different stages of their postpartum journey. The final sculpture, which took roughly two months to create, portrays a woman cradling her newborn and wearing nothing but postpartum underwear. Her belly is covered in stretch marks and bulges. D’Clark chose to make the sculpture out of bronze, which accentuates creases and folds on the skin.
“Some of my favorite details of the piece [are] the linea nigra, messy bun, and the texture of Frida’s postpartum pants, which became an iconic marker of this collaboration,” D’Clark says.

Hirschhorn was particularly moved by the stance that the woman in the statue is taking, with one hand on her hip. “There’s a coexistence of strength and fragility,” she says. “Her fingers are facing forward in a position of confidence and surety, or perhaps exhaustion.”

Mother Vérité now stands at the steps of the Lindo Wing, in the same spot where Princess Diana stood, reflecting another aspect of the postpartum experience. “We’re not denigrating the experience of announcing a birth in this way,” Hirschhorn clarifies. “But we’re juxtaposing it with a slightly more authentic and realistic portrayal of a woman’s physical transition into motherhood.”

For D’Clark, it’s important that the statue is displayed publicly, alongside the many statues that grace London. “Public project[s] and powerful storytelling are vital to visualizing overlooked narratives and building empathy in our cities,” she says. View the full article
-
Long John Silver’s got rid of its fish logo. Blame chicken
Long John Silver’s is known for its seafood, but it’d like to be better known for its poultry. So much so that it just swapped the fish in its logo for a chicken.

In time for National Seafood Month, the Kentucky-based chain announced that it’s dropping the golden yellow fish illustration for a similarly styled chicken illustration. It’s also adding the words “Chicken” and “Seafood” to its lockup. “Guests have been telling us for years that our chicken is a best-kept secret,” Long John Silver’s senior vice president of marketing and innovation Christopher Caudill said in a statement. “It’s time we let that secret out.”

For now, the new logo shows up on the Long John Silver’s website and social media, and it’s expected to be on the wrap of the chain’s car at the South Point 400, a NASCAR Cup Series stock car race this Sunday in Las Vegas. The company didn’t indicate in its announcement whether the rebrand is permanent, nor whether it will also appear in stores and on signage nationwide (Long John Silver’s did not respond to a request for comment). At the very least, it’s the latest example of a promotional brand transformation. Like Maxwell House temporarily rebranding as Maxwell Apartment or Lacoste releasing a limited-edition goat logo, Long John Silver’s new chicken logo is meant to communicate a pointed message: This is a fast-food restaurant that does more than crab cakes and surf clams.

The rise of fast-food chicken sales

Founded in 1969 and named after the pirate in Treasure Island, Long John Silver’s was designed to bring seafood to the landlocked parts of the U.S. But diners in places like Des Moines and Denver aren’t necessarily looking for fish and shrimp these days—they are looking for chicken. Fast-food chicken sales now account for more than $53 billion in annual revenue for U.S. restaurants, according to data from Technomic, a market research firm. You can see its popularity reflected in menu items like Taco Bell’s limited-run chicken nuggets and Wendy’s new chicken tenders. To protect its turf in a time of rising competition, KFC is leaning into its origin story and mascot.

For Long John Silver’s, the new look helps promote new chicken menu items, like chicken wraps and Nashville hot chicken, which the company says it’s testing at a new flagship location in Louisville, Kentucky. Chicken is “part of our heritage,” Caudill said, “so it deserves its rightful place on our logo, our menu, and our guests’ tables,” but he added that chicken is also “a big part of our future.”

Previously owned by Yum! Brands, the parent company behind the chains KFC, Taco Bell, and Pizza Hut, Long John Silver’s was acquired for an undisclosed sum in March 2021, according to PitchBook. The chain has closed more than 150 restaurants in three years. With the dual focus on seafood and chicken, though, Long John Silver’s is hoping to reverse its decline by turning its stores into their own self-contained combination locations. For longtime customers still drawn in by its fish and shrimp, the brand will still deliver, but by casting a wider net, it’s hoping to catch some of the growing number of chicken fans too. View the full article
-
Israeli military says Gaza ceasefire has started
Move sets into motion first step of Donald Trump’s plan to end two-year war. View the full article
-
The only leadership trait that really matters
For decades, MBA programs, leadership trainings, and consultancies have told us that effective leaders share a set of “essential competencies.” You know the lists: empathy, strategic vision, humility, charisma, psychological safety, communication skills. These ideas get repeated in boardrooms and promised in executive education programs. But if these competencies were truly essential, then the leaders we most admire should have them. The truth is, they often don’t.

This never made sense to me. In addition to my writing and research, I’ve spent the past 15 years running a secret dining experience called the Influencers Dinner. We’ve hosted close to 4,000 Olympians, Nobel laureates, executives, astronauts, Grammy-winning artists, Oscar-winning directors, and even the occasional prime minister or princess. And what became clear, sitting across the table from these leaders, is that while all of them were wildly effective, there was no commonality in their skills. Some were quiet, others loud. Some thrived on collaboration, others preferred making decisions on their own. Yet each led organizations, movements, or creative projects that shaped the world.

Look at the most impactful leaders you know and you see the same thing. Elon Musk is not known for humility or building consensus. Steve Jobs was not exactly famous for psychological safety. Yet both are considered among the most effective leaders of our time. So what explains it?

The Psychology of Following: The Future Effect

The only thing that defines a leader is that they have followers. And people follow for one main reason: We don’t relate to the present; we relate to the future we believe we have.

Think back to high school. On Friday afternoons at 1 p.m., we were still stuck in class, but felt excited because the weekend was ahead. On Sunday nights at 6 p.m., we were free, but anxious, already anticipating Monday. The difference wasn’t the present; it was the future we expected. The way we feel about now depends on what we think tomorrow will look like.

This is exactly how we respond to leaders. When we interact with someone who makes us feel there’s a better future ahead, we follow them. We don’t need to like them. We can even dislike them. But if they make us believe tomorrow will be better, we’ll follow and often forgive their flaws. So if you want people to follow you, ask yourself: How do they feel about the future when they interact with you?

The Myth of Vision and Charisma

Ask people why they follow leaders and you’ll often hear “vision and charisma.” But most leaders don’t have both. Many don’t have either. What they do have are a few super skills that are disproportionately strong. These super skills are so powerful that they convince people the future will be different and better.

Here’s the point: Don’t waste time trying to fit some generic leadership model. Instead, figure out the one or two strengths that make people feel optimistic about the future when they deal with you, and then lean into those. It’s not about being good at everything. It’s about being exceptional at something that makes others believe tomorrow will be better.

The Catch: Leadership ≠ Effectiveness

But here’s the problem. Getting people to follow doesn’t mean you’ll succeed. Crowds can follow someone straight into failure. You can gather a crew for the heist without knowing how to get away with it. Leadership explains why people gather. It doesn’t explain whether they succeed. For success, we need something else.
Enter Team Intelligence

If leadership gets the crew together, team intelligence determines whether they actually pull off the job. Team intelligence is not about IQ, degrees, or resumes. It’s about the habits and skills that make groups smarter and more effective together than they could ever be alone.

IQ turns out to be a poor predictor of group success. Studies of basketball teams, for example, show that it isn’t the players with the highest salaries or raw talent who decide the outcome. It’s the quality of the coach. The coach aligns reasoning, manages attention, and makes sure resources are used well. Similarly, research shows that team intelligence has more to do with collaboration and communication than with the average IQ of team members.

There are three pillars that determine whether a team thrives or fails:

Reasoning: aligning on clear goals and purpose so debates lead to better solutions rather than power struggles.
Attention: managing focus and communication so people feel safe enough to share ideas and challenge assumptions.
Resources: surfacing hidden skills and networks within the team and making sure the right expertise is available at the right time.

Implications for Leaders Across Sectors

For leaders in business, government, education, or nonprofits, the lesson is simple: Stop chasing the illusion of being well-rounded. Instead, recognize your super skill, the thing that makes people feel tomorrow will be better. Then focus on cultivating team intelligence. When reasoning, attention, and resources are in place, your team doesn’t just follow. They actually succeed.

Conclusion: The New and Better Future

Leadership is not about checking boxes on a competency model. It’s about making people feel there’s a new and better future. That’s why people follow. But whether that following leads to real results depends on team intelligence. The challenge for leaders today is not to be more well-rounded, but to be more intentional. Lean into the super skills that inspire followership, and build the reasoning, attention, and resources that make teams effective. That’s how a vision becomes reality, and how a better future becomes possible. View the full article
-
Microsoft Office’s icons just got curvy, colorful upgrades
Microsoft just redesigned all of its Office icons to embrace the AI era, and, according to the company, that means ditching solid shapes for all things “fluid and vibrant.” The 12 new icons, which began rolling out on October 1, encompass all of Microsoft’s platforms, from Outlook to Word and Teams. This is the first time in seven years that Microsoft has updated the icons’ aesthetics, and the company’s designers have reworked every logo to be curvier, brighter, and more colorful.

“Today, as we roll out refreshed icons for Microsoft 365 apps, small but significant design changes are a reflection and a signal,” a Microsoft blog post, published on October 1, reads. “As a reflection, they encapsulate how AI is shifting the discipline of design and the nature of product development.”

Microsoft’s new icons reflect a broader trend in the tech world. Now that AI is ushering in the next major era of the industry, its biggest players are trying to figure out exactly how these expanded capabilities should be reflected in their branding. So far, one trend is clear: AI is becoming visually synonymous with a colorful gradient.

Why Microsoft just redesigned its icons

Microsoft, like many of its competitors, was a victim of the 2010s “blanding” trend, when companies across a variety of sectors were scrambling to trade their serif wordmarks for sans-serif ones and ditch 3D logos for ultra-simple 2D shapes. For tech companies, blanding and flat logo design were especially rampant, as simplified branding made it easier to design for different devices and apps (Google was arguably one of the first tech companies to go bland back in 2013).

Microsoft’s last icon redesign effort was in 2018, when it adopted ultra-flattened versions of its 10 Office app logos. Per the recent blog post, those designs were intended to offer a connected look across platforms and devices in “the early days of apps that composed together and truly collaborative experiences.”

Now, the post continues, workflows have undergone a major change thanks to AI: Collaboration is no longer just human-to-human, but also human-to-AI. “With that paradigm shift come significant changes to the UX discipline itself and how we approach product making,” the post says. It explains that, while longer cycles of development used to be followed by a reveal of big changes, AI models are allowing UX developers to make changes in continuous waves. “Research shows changes to iconography are almost always received as a signal for product changes and in an era of ongoing, smaller shifts, the icons should reflect that.”

The flat logo is out. Say hello to the gradient logo

Microsoft’s answer to that challenge has been to bring a tiny bit of life back into its icons. A broader color palette has allowed the company to give icons like the Outlook envelope, PowerPoint bubble, and Teams people more visual depth. Any sharp shapes and crisp lines have also been swapped for curved ones. “We’ve modernized Microsoft 365 icons to feel alive and approachable—soft curves, smooth folds, and dynamic motion that reflect Copilot’s brand,” says Gareth Oystryk, Microsoft 365’s senior director of consumer marketing.

Perhaps most noticeably, Microsoft has implemented a gradient color palette across almost every icon. Word’s flat blue hues are now blue, navy, and purple; PowerPoint’s orange is accented with pink and red; and Excel’s green includes a hint of yellow.
“Where gradients were once subtle, they’re now richer and more vibrant, featuring exaggerated analogous transitions that improve contrast and accessibility,” the post reads. “This shift makes the icons feel brighter, punchier, and more dynamic.”

Gradients have long been a motif of choice for tech companies (see Instagram and Apple Music), but, more recently, they’ve become synonymous with AI for companies that choose not to go the OpenAI black-void route. Microsoft’s own Copilot has embraced a gradient logo, alongside others including Apple Intelligence, Google Gemini, and Meta AI. Google recently reworked its iconic “G” to feature a gradient across all platforms, noting at the time that the move “visually reflects our evolution in the AI era.”

This embrace of gradients is, to some extent, Big Tech’s safest answer to visualizing something as amorphous as AI. But it may also be evidence that the tech design pendulum is swinging away from blanding and back toward an earlier era of playful color and skeuomorphic icons. If “flat logos” were the hallmark of the digital era, it’s possible that gradient logos are becoming the symbol of the AI age. View the full article
-
Google told to loosen control over search by UK competition regulator
Big Tech company becomes the first to be designated with special status under strict new digital laws. View the full article
-
Nobel Peace Prize awarded to María Corina Machado
Win for Venezuela’s main opposition leader dashes the hopes of US President Donald Trump. View the full article
-
Is ironing dead? The latest decline in dress code
Office dress code has been trending more casual for years, and the pandemic helped turn athleisure and sweatpants into business casual. And now, there’s a growing debate around one practice long thought to be standard for anyone wishing to look presentable and professional: ironing. In fact, many people on social media are saying they never iron anything—whether it’s work clothes or otherwise.

“For science, how many of you still own an iron—the one for taking wrinkles out of clothing—AND know how to use it?” one Threads user recently asked. It’s a sentiment others have shared online from TikTok to Facebook. Naturally, the replies were divided. “I use mine weekly and I can’t imagine how anyone can look as though they haven’t just rolled out of bed without one,” one user replied. “Do I own an iron? Yes. Do I know how to use it? Also yes,” replied another. “Have I used it at any time in the past 7 years? Hard no.”

It might be tempting to put the decline in ironing down to generational differences: Gen Z grew up during COVID with years of remote learning on Zoom from home, and has struggled with navigating dressier attire. But the reality is more complex. Just a few years ago, after all, headlines constantly churned about how millennials killed everything from napkins to mayonnaise, homeownership, and middle management. It is true that roughly 30% of 18- to 34-year-olds don’t own an iron and have never even touched one, according to reports. Yet the debate over whether to iron or not to iron transcends generational divides—in some cases, uniting generations over a common cause. A screenshot on Reddit reads: “One main thing millennials can be proud of is that we collectively banished ironing clothes.” Responding to the post, one reply read: “Im GenX. I refuse to wear clothes that require high maintenance or ironing.” Another wrote: “Gen Z here (26) similar with me, I know how to iron but I very rarely do it cus I mostly don’t have too.”

Modern easy-care fabrics, the invention of handheld steamers and wrinkle-release sprays, and a shifting work culture that encourages less formal dressing have turned a once-essential appliance into a relic of a bygone era for some. As one response to a viral post by The Imperfect Mum read: “My mum once said she doesn’t remember the ’80s because she spent the entire decade ironing.” The rise of dual-career households means many simply don’t have the time, or the desire, to stand at the ironing board for hours on a Sunday ironing socks for the week ahead.

This iron avoidance has led to the development of a number of ingenious coping mechanisms: Dark colors and synthetic fabrics hide wrinkles better. Dryers, or hair straighteners, can stand in for irons in a crunch. And leaving the house in slightly rumpled outfits is no longer the fashion faux pas it used to be. (Besides, the creases will probably relax by the time you get to where you need to be.)

Still, there remain those who point-blank refuse to leave the house in a wrinkled shirt, diligently hauling out the ironing board on the daily. And truly? Nothing says you have your life together quite like a crisp, crease-free shirt. View the full article
-
What fog and gravity can teach us about urbanism
I don’t know if urbanism is science or art, but I do know its outcomes are best with a dose of creativity. There’s plenty to learn from the giant leaps in art and science that can improve your urbanism advocacy. Happy, healthy communities aren’t made by being stuck in a bygone era.

The value of fog

Impressionist painters didn’t discover fog. It was always there, but it wasn’t something people were discussing much in the early 19th century, leading up to the impressionists and tonalists. Each of those artistic movements created illusions of reality with familiar scenes. James McNeill Whistler was an influential figure and one of the original tonalists. Here’s what he had to say about finding inspiration from natural elements previously left off the canvas:

“And when the evening mist clothes the riverside with poetry, as with a veil, and the poor buildings lose themselves in the dim sky, and the tall chimneys become campanili, and the warehouses are palaces in the night and the whole city hangs in the heavens, and fairy-land is before us—then the wayfarer hastens home; the working man and the cultured one, the wise man and the one of pleasure, cease to understand, as they have ceased to see, and Nature, who, for once, has sung in tune, sings her exquisite song to the artists alone.”

Claude Monet is probably the most famous of the impressionist bunch. Monet’s focus shifted from clear objects to the effects of atmosphere and light after he stumbled into the London fog. Critics would argue about deeper meanings, whether impressionism was creating a dreamy or nightmarish mood for London, angelic or demonic. But the meaning (or lack thereof) isn’t what got me thinking about these 19th-century art movements. It’s the idea that something was always there, and it took artists to draw the attention of normies to it.

The Houses of Parliament (Effect of Fog) [Image: Wiki Commons]

The influence of gravity

Some 300 years before Monet and Whistler, Nicolaus Copernicus was making the shocking case that Earth and other planets revolved around the sun, rather than Earth being the center of everything. He didn’t get everything right. Copernicus had no concept of gravity, so he wasn’t clear on how the celestial blobs swirled around each other or why they all orbited the sun. More than a century later, Isaac Newton watched an apple fall out of a tree. He organized his math homework and philosophy into laws of gravity that were eventually used to describe planetary motion.

In hindsight, it seems almost childish to talk about major leaps in art and science because the advancements seem so obvious. Of course this foggy picture with shadowy figures in motion makes me feel uneasy. Of course gravity makes things fall to the ground.

Great leaps forward

Generations ahead of us will probably read stories about our era that begin like this: “Once upon a terrible time, America’s most educated city planners were convinced that cities optimized for motor vehicle traffic would be the safest and most prosperous.” Things that don’t even cross our minds today as possible outcomes will be boring in their obviousness later.

Consider space: In 1960, science fiction was the only reasonable place for stories about a group of humans traveling beyond our atmosphere, circling the globe, and returning safely in their ship. In 1961, Project Mercury launched multiple such voyages, making all sorts of discoveries about how people and machines function in weightless environments.
Consider music: In 1965, anyone interested in hearing a new band had to either listen live to one of a few radio stations or suffer through a friend’s attempt to sing. In 1966, the portable cassette recorder was introduced, making it possible for anyone to make and play recordings without cables and microphones.

Consider city planning: In 2022, land use planners and politicians still worked under the assumption that the social and physical harms of zoning were necessary and would always exist. In 2023, a brave local planning department liberated its community from the crushing burdens of zoning, becoming a model for others to follow. (Maybe.)

There’s no reason to always be operating from a yesteryear mindset with issues like affordable housing, traffic engineering, parks planning, and intersection design. Challenge what others take for granted. Open your eyes to the hidden potential of your block, your street, your neighborhood, and your city. View the full article