All Activity

  1. Past hour
  2. Google is updating how Google Ads paces budgets for campaigns using ad schedules, shifting toward full monthly spend targets regardless of how many days ads actually run.

What’s changing. Starting June 1, campaigns will pace toward the full monthly budget limit (30.4x the daily budget), even if ads are only eligible to run on certain days. Previously, pacing was typically based on the number of active days in the schedule.

What’s not changing. Daily and monthly caps remain the same. Campaigns still won’t exceed 2x the daily budget in a single day or 30.4x over a month, and ads won’t serve on disabled days.

Why we care. Advertisers using limited schedules — like weekdays only or specific hours — may see spend accelerate, as Google now aims to hit the full monthly cap instead of scaling down on active days.

Zoom in. This means campaigns with fewer serving days can spend more aggressively on those days. For example, if ads run only half the month, Google can hit the daily max each day without needing to pull back elsewhere — and still stay under the monthly cap.

Between the lines. Google is prioritizing full budget utilization over evenly distributed spend, giving its systems more flexibility to capture demand when campaigns are eligible to run.

What to watch. Advertisers with tight schedules may need to revisit budgets and performance expectations, as spend could concentrate more heavily on active days.

Bottom line. Budget pacing is becoming less about when ads run — and more about ensuring the full budget gets spent.

First seen. Several advertisers mentioned receiving the communication from Google, and Google Ads Coach Jyll Saskin Gales clarified on LinkedIn what the update means and what isn’t changing. View the full article
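The pacing arithmetic above is easy to sanity-check yourself. A minimal sketch, assuming a hypothetical $100 daily budget and a weekdays-only schedule (only the 30.4x monthly and 2x daily multipliers come from Google's announcement; the budget and day count are invented for illustration):

```python
# Hypothetical inputs: $100/day budget, weekdays-only schedule.
daily_budget = 100.0
monthly_cap = 30.4 * daily_budget   # Google's stated monthly limit (30.4x daily)
daily_cap = 2.0 * daily_budget      # max spend allowed on any single day (2x daily)
active_days = 22                    # weekdays in a typical month

# Old behavior (roughly): pace toward daily budget x number of active days.
old_target = active_days * daily_budget                  # 2200.0

# New behavior: pace toward the full monthly cap, while still honoring the
# per-day ceiling and serving only on scheduled days.
new_target = min(monthly_cap, active_days * daily_cap)   # min(3040, 4400) = 3040.0

print(old_target, new_target)
```

Under these assumptions the same schedule can absorb roughly 38% more monthly spend, which is why limited-schedule advertisers may want to revisit their budgets.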
  3. Attention is fragmenting further every day as the platforms providing information continue to multiply. There are new players on the scene, like AI search, while companies build proprietary spaces through social networks and communities. Smaller spaces pop up daily through vibe-coded apps.

Many of these platforms are noisier than ever, with everyone demanding our attention at once. We’re drowning in information, and trust is eroding in sources like search engines and social media. We still use these platforms for research, but go elsewhere to validate what we find and make decisions.

We’re shifting back to a source we’ve trusted since the beginning: other people. That means showing up across multiplying platforms and in as many people-led sources as possible.

Search is a trust experience

Rachel Botsman is a leading expert and author on trust in the modern world. Botsman defines trust as: “A confident relationship with the unknown.”

I’ve read tons of different definitions of trust, but this is by far my favorite. It’s the simplest and touches on the core component of dealing with the unknown or uncertainty. We don’t need trust when outcomes feel certain. We need trust when we’re dealing with the unknown. Searching for information is what humans do when they’re uncertain.

There are three trust layers that occur every time we search for information:
- Self-trust (I’m uncertain): I don’t trust that I have the information I need to make a decision at this moment in time.
- Platform trust (Where I trust to search for answers): Which platform, community, or real-world space do I trust to find answers to my questions?
- Source trust (Whose or what information I act on): Do I trust this enough to believe it, click on it, buy it, let it guide me, or change my mind? People can absolutely skip platform trust and jump directly here.

Searching for information is a trust experience from start to finish.
It’s a human behavior, and, as we’ll discover, the best way to support human behavior is through other humans.

An example of my own search journey to find a trusted answer

Here’s what a recent search journey of mine looked like when I was interested in buying a new pair of shoes.

I started with AI tools and did some low-trust research, getting a list of options that met my requirements from ChatGPT and cross-referencing that list with Claude’s output. Then I wanted a sense of pricing and delivery timelines (high trust), so I quickly read through reviews while I was still working with the AI outputs (low trust). I searched Amazon for the options surfaced by ChatGPT and Claude, read reviews, got pricing, and noted who ships the quickest.

From there, I moved on to Google and found my medium-trust people sources. I checked Reddit for brand and model commentary, read third-party articles on running sites and from running influencers, and watched YouTube video breakdowns. Then I got bombarded with low-trust advertising on social media, seeing retargeting ads everywhere.

Finally, I turned to my high-trust people sources. I asked a trusted running community, a neighbor I often see running, and my dad, a former marathon runner. I also went to a running shop and spoke with the sales team.

Search journeys now span dozens of platforms and sources

Yext’s 2025 research of 2,237 global consumers found more platforms getting used in a single search journey:
- Approximately 75% of consumers use new search tools more today than they did one year ago.
- Just 10% trust the first result, while 48% of consumers cross-check answers across platforms.

These results very much mirrored my personal search experience. I hit roughly 65 sources in my search journey:
- Two AI tools, hitting ~10 links in each.
- Amazon, hitting ~15 products with reviews.
- Google, scanning ~10 Reddit threads, approximately five third-party sites, and five YouTube videos.
- Social media, seeing ~10 retargeting ads.
- Community, receiving seven direct replies.
- Conversations, three directly with other people.

In a similar vein, Expedia’s The Path to Purchase research found that huge amounts of source content are now consumed by travelers planning a trip. In the 45 days prior to booking travel, users spend an average of 303 minutes viewing ~141 pages of travel content.

Of my 65 sources, 45 were people-led. This trend can also be seen in professional decisions via the Censuswide – Global Professionals sentiment study (commissioned by LinkedIn), which shows 43% of people rate their professional network as their most trusted source, ahead of search engines and AI tools. And the 2026 Edelman Trust Barometer shows a general trend of uncertainty rising and people placing their trust in the people closest to them.

Source: 2026 Edelman Trust Barometer

Time and time again, we see that when people feel uncertain and need trusted advice, they often turn to others.

So how do you turn trust into visibility?

During someone’s search journey, you ideally want to show up in:
- All the platforms they use to find information.
- As many people-led sources as possible.

That sounds pretty overwhelming. To make this workable, you need a playbook that reverses the order:
1. Get mentioned in people-led sources often (by building genuine trust with these people).
2. As a result of these mentions, show up in the major search platforms as they continue rewarding people-led sources.

If we optimize at the people layer, the platform layer follows. Build trust, earn mentions, and get visibility.

Back to my shoe-purchasing journey. Many folks have taken to social media and review sites to talk about Adidas Terrex (the shoes I finally purchased after my trust-seeking journey), so they were highly visible in all my touchpoints.
This means that Adidas is actively engaging in trust-building activities.

Adidas has its own running club, events, and communities. They’re engaging with people. Here’s an example of a recent event where they collaborated with the Underground Fan Club to support more women getting into trail running.

People are mentioning their brand and products. This single event had hundreds of posts on Instagram from the participants and attendees. Multiply that by their other events and community initiatives, and you can see how their visibility quickly adds up. Plus, they’re appearing via hashtags, account tags, and mentions on social media platforms like TikTok more generally.

Adidas Terrex is also getting mentioned in forums — there are full Reddit threads devoted to advice on these shoes. Their people-led source mentions are reflected in AI search platform results.

The research backs this up:
- Profound analyzed more than 4 billion AI citations and 300 million answer engine responses and found that AI search platforms like ChatGPT, Google’s AI Overviews, and Perplexity systematically prioritize human conversation to build trust.
- AirOps analyzed over 5.5 million LLM responses across ChatGPT, Perplexity, Gemini, and Google AI Mode, and their data showed the top three cited domains drove brand mentions from community and user-generated content platforms.

When you genuinely earn the trust of people willing to mention you positively of their own accord, you also capture visibility within search platforms. Because visibility is a byproduct of trust.

Where to go to earn people’s trust

Relationships are the bedrock of trust, and there are plenty of places you can go to start building them. These are a few people-led places you can start with:
- Communities: Online and in person.
- Events: Conferences and meetups.
- Social media: LinkedIn, Instagram, TikTok, and similar platforms.
- Forums: Reddit and Quora.
Look for people-led places with the components listed below. The stronger they are in these characteristics, the higher the trust:
- Where smooth, two-way conversations happen in real time.
- Where you have the ability to show up consistently.
- Where your audience gathers for specific, niche reasons and support.
- Where people are not anonymous and can show up as themselves (not personas).

Here’s a general guide for how these environments, when highly engaged, are typically trusted:

| Trust-building components | Communities | Events | Social | Forums |
| --- | --- | --- | --- | --- |
| Two-way conversations | High | High | Low | Medium |
| The ability to show up consistently | High | Medium | High | Medium |
| People gather for specific, niche reasons | High | High | Low | Medium |
| Where you can be yourself (not anonymous) | High | High | Medium | Medium |

Communities and events require lengthier time commitments and higher financial investment, but the trust-building components are very strong. Entering these spaces gives you more of the tools you need to build both relationships and trust. Social media and forums have lower barriers to entry, but the trust-building components are weaker.

You can find the places you want to start with by:
- Directly surveying your customers and audience on where they spend time.
- Seeing who’s frequently mentioned in your industry’s newsletters, podcasts, and other publications.
- Performing a search in your search platform of choice.

How to engage in trust-building spaces

People are seeking information to help them gain confidence in what they’re unsure about. They’re seeking help, and help builds trust. This means helping is your primary objective – not building brand awareness, pushing folks through your consideration funnel, or selling. Helping people.

Start by listening, not talking

Once you’ve identified your places, don’t rush in and start talking about yourself, your brand, or your challenges. Listen first. This is a two-part process:

1. What does ‘helpful’ look like in this space?
This is about understanding why people gather in this space — what they get out of it. What high-level needs or wants are getting met that keep people coming back? These typically don’t change much over time. Maybe they’re looking for connection, education, amplification, or inspiration. Figure that out, and then cross-reference it with what you have to offer. Find the intersections that make sense for you and identify the ways in which you can offer support.

2. What topics are people focused on?

This is about understanding what’s “trending” right now for folks in the space. What immediate needs or wants are getting met at the moment? These typically fluctuate.

Listen. Find your intersections. Figure out what you can help with.

Engage to build trust

This will start with 1:1 conversations in community Slack groups, at events, or in the comments of social media and forums. Trust takes time to build. There are no shortcuts.

Show up as yourself. You’re not your brand; you’re a person behind your brand. People want advice from real people, and if you begin by labeling yourself as a brand representative advocating for your product, it’s game over.

Show up consistently, have these conversations, provide help on a 1:1 basis, and keep track of what’s actually helping. While trust takes time to build, your learnings can help you scale how you help based on real audience insights. Once you have a good sense of that, you can take the most frequently helpful themes and build out systems or assets that scale your ability to help.

Turn conversations into scalable trust

These assets may not build as strong a level of trust as your 1:1 conversations. Those 1:1 conversations with the right audience will have the most trust and the most depth. But if you focus your scaled assets on helping people become who they want to be, it will strengthen trust in your 1:many initiatives far more than typical “how to do x” content.
So take a deeper look at the pain points mentioned in your conversations and ask, “Who is this person trying to become?” Then build an asset from the ways you’ve helped those folks in 1:1 conversations. Create a mention power-up that helps people showcase their desired identity and who you helped them become. Something that proves their credibility and that they’re excited to share!

Here are a few examples of what this playbook could look like for different audiences:

| Audience | High-level need | Timely need | Scaled help asset | Mention power-up |
| --- | --- | --- | --- | --- |
| Professionals | Amplification | Desire to grow personal brand | Guest-posting program | The content is the power-up! They’ll share and tag you. |
| Professionals | Opportunities | New job role | Skill training and job board | Shareable certification for skill-training completion |
| Musicians | Education | Wanting to learn to play drums | Video library of drum lessons | Personalized “I’m a drummer” social image |
| Crafters | Advice | Can’t find sustainable materials | Curated resource of eco-friendly materials | Citable asset built with “[your brand’s] eco-friendly resources” |
| Readers | Inspiration | Desire to break into a new genre | Quiz that helps them decide | Sharable quiz output boldly defining their new genre |
| Budgeters | Education | What to cut back spend on | Budget template and tracker | Sharable “I saved $x with [your brand]” asset |

What does this actually look like in action?

Over the past few years, I have transitioned my career from marketing to community building. I’ve learned the power of shifting my mindset from selling to helping. And I’ve seen brands use the above playbook to earn visibility and real business impact.

In our community, we partner with an SEO SaaS platform that uses this playbook powerfully. We’ve seen them listen to what it means to be helpful in their community — people want opportunities to be amplified. We’ve seen them show up consistently — their marketing manager has 400+ messages in our Slack community. We’ve seen Jojo have tons of 1:1 conversations offering help.
We’ve seen Jojo continuously show up as herself in these helpful answers and in general as a valued member of the community. And we’ve seen those 1:1 connections pay off in terms of visibility on the content itself as their sharable mention power-up.

They then did the work to build their scaled asset of help: first by listening through surveying members and identifying the core challenges that people had within this topic. They further boosted their trust by collaborating with the community and featuring community members within their scaled asset. Again, they reaped the rewards of visibility with their shareable mention power-up.

While earlier I told you to go in without a sales mindset, the beauty is that the trust you build can grow into just that: real business impact. Our SEO SaaS partner has earned £50,000+ in new annual revenue through the partnership so far. This stuff works when you find the right space, listen, learn, and consistently show up to help.

Building trust is a long-term visibility bet

Trust will always be a throughline in how people search for information. When you make building trust an ongoing part of your strategy, you prepare your business beyond any single platform or system. You’ll show up in AI search today and whatever comes next tomorrow.

Make trust the priority, and visibility follows. That’s how you move from chasing algorithms to building something that lasts. View the full article
  4. Populist party promises panel discussions with ‘leading voices from across the economy’ to tempt wary chief executives. View the full article
  5. New report finds outdated systems are limiting insight, slowing workflows, and putting firms at a competitive disadvantage. From CPA Trendlines, sponsored by Ace Cloud Hosting. Go PRO for members-only access to more CPA Trendlines Research. View the full article
  6. Investor concerns that technology could hit companies’ business models will derail exits, says Swedish group. View the full article
  7. Federal Council sets out plans for banking reform after months of lobbying by country’s biggest lender. View the full article
  8. Today
  9. The President said Tuesday the United States was indefinitely extending its ceasefire with Iran — a day before it was to expire — as a new round of peace talks was on hold. The announcement appeared to ease fears that the fighting, which had shaken energy markets and the global economy, would promptly resume.

Pakistan had planned to host a second round of talks, but the White House put on hold Vice President JD Vance’s planned trip to Islamabad as Iran rebuffed efforts to restart negotiations. Iran has not yet responded to The President’s announcement of the ceasefire extension. Both countries have warned that, without a deal, they were prepared to resume fighting.

Pakistan scrambles to get US and Iran to negotiate

Pakistani leaders, including Prime Minister Shehbaz Sharif, worked intensively to get both sides to agree to a second round of ceasefire talks, according to two officials who spoke on condition of anonymity because they were not authorized to speak to the media. Sharif later thanked The President for his “gracious acceptance” of Pakistan’s request, saying the ceasefire extension would allow ongoing diplomatic efforts to proceed.

Iranian Foreign Ministry spokesman Esmail Baghaei told Iran’s state TV there has been “no final decision” on whether to agree to more talks because of “unacceptable actions” by the U.S., apparently referring to the U.S. blockade of Iranian ports. In a Truth Social post announcing the ceasefire extension, The President said the U.S. would continue the blockade.

As Vance put on hold a return trip to Islamabad, Pakistan’s capital, The President’s special envoy Steve Witkoff and son-in-law Jared Kushner were expected in Washington on Tuesday afternoon for consultations about how to proceed, said a U.S. official who spoke on condition of anonymity to discuss internal administration deliberations.
The official cautioned that The President could change his mind on negotiating with Iran at any time, and declined to predict what would happen. The official said The President has options short of restarting airstrikes.

Both sides remain dug in rhetorically

Before announcing the ceasefire extension, The President had warned that “lots of bombs” will “start going off” if there’s no agreement before the Wednesday deadline, while Iran’s chief negotiator said that Tehran has “new cards on the battlefield” that haven’t yet been revealed.

A senior commander in Iran’s Islamic Revolutionary Guard Corps threatened to destroy the region’s oil industry if war with the U.S. resumes. “If southern neighbors allow the enemy to use their facilities to attack Iran, they should say goodbye to oil production in the Middle East region,” Gen. Majid Mousavi told an Iranian news site.

Strait of Hormuz control key to negotiations

Iran’s envoy to the United Nations said Tuesday that Tehran has “received some sign” that the U.S. is ready to stop its blockade of Iranian ports. Ambassador Amir Saeid Iravani said ending the blockade remains a condition for Iran to rejoin peace talks. When that happens, he said, “I think the next round of the negotiations will take place.”

The U.S. imposed the blockade to pressure Tehran into ending its stranglehold on the Strait of Hormuz, a key shipping lane through which 20% of the world’s natural gas and crude oil transits in peacetime. Iran’s grip on the strait has sent oil prices soaring. Brent crude, the international standard, was trading at close to $95 per barrel on Tuesday, up more than 30% from Feb. 28, the day that Israel and the U.S. attacked Iran to start the war. Before the war began, the Strait of Hormuz had been fully open to international shipping. The President has demanded that vessels again be allowed to transit unimpeded.
Over the weekend, Iran said that it had received new proposals from Washington, but also suggested that a wide gap remains between the sides. Issues that derailed the previous round of negotiations included Iran’s nuclear enrichment program, its regional proxies and the strait.

The US says its forces board sanctioned oil tanker

On Tuesday, the U.S. said its forces boarded an oil tanker previously sanctioned for smuggling Iranian crude oil in Asia. The Pentagon said in a social media post that U.S. forces boarded the M/T Tifani “without incident.” The U.S. military did not say where the vessel had been boarded, though ship-tracking data showed the Tifani in the Indian Ocean between Sri Lanka and Indonesia on Tuesday. The Pentagon statement added that “international waters are not a refuge for sanctioned vessels.”

The U.S. military on Sunday seized an Iranian container ship, the first interception under the blockade. Iran’s joint military command called the armed boarding an act of piracy and a violation of the ceasefire.

Pakistan hopeful talks will proceed

Pakistani officials have expressed confidence that Iran will also send a delegation to resume the talks — the highest-level negotiations between the U.S. and Iran since the 1979 Islamic Revolution. The first round April 11 and 12 ended without an agreement.

Pakistan said Foreign Minister Ishaq Dar met Tuesday separately with the U.S. and China’s top diplomats in Islamabad. China is a key trading partner of Iran. Security has been tightened across Islamabad, where authorities have deployed thousands of personnel and increased patrols along routes leading to the airport.

U.N. Secretary-General António Guterres said the ceasefire extension was “an important step toward de-escalation” that will create “critical space for diplomacy and confidence-building between Iran and the United States,” according to his spokesman, Stephane Dujarric.
Talks between Israel and Lebanon are to resume

In Lebanon, the Iran-backed militant group Hezbollah said in a statement it had fired rockets and drones at Israeli forces for the first time since a 10-day truce took effect last Friday “in response to the blatant and documented violations” by Israel. Those violations, it said, included “attacks on civilians and the destruction of their homes and villages in southern Lebanon.” The Israeli army said it responded by striking the group’s rocket launcher.

Israeli officials have said they intend to maintain a buffer zone in southern Lebanon — an area that includes dozens of villages whose residents have not been allowed to return.

Historic diplomatic talks between Israel and Lebanon are to resume on Thursday in Washington, an Israeli, a Lebanese and a U.S. official said. All three spoke on condition of anonymity to discuss the behind-the-scenes negotiations. The Israeli and Lebanese ambassadors met last week for the first direct diplomatic talks in decades. Israel says the talks are aimed at disarming Hezbollah and reaching a peace agreement with Lebanon.

Fighting between Israel and the Iran-backed Hezbollah broke out two days after the U.S. and Israel launched joint strikes on Iran to start the war. In Lebanon, the fighting has killed more than 2,290 people. Since the war started, at least 3,375 people have been killed in Iran, according to authorities. Additionally, 23 people have died in Israel and more than a dozen in Gulf Arab states. Fifteen Israeli soldiers in Lebanon and 13 U.S. service members throughout the region have been killed.

Associated Press writers Michelle L. Price, Aamer Madhani and Darlene Superville in Washington; Samy Magdy in Cairo; David Rising and Huizhong Wu in Bangkok; Julia Frankel in New York; Bill Barrow in Atlanta, Edith M. Lederer and Farnoush Amiri at the United Nations; Russ Bynum in Savannah, Georgia, and Hannah Schoenbaum in Salt Lake City contributed to this report.
—Munir Ahmed, Jon Gambrell and Matthew Lee, Associated Press View the full article
  10. From Experiences to Transformations: The Future of Value Creation, with Rory Henry. The Holistic Guide to Wealth Management. Go PRO for members-only access to more Rory Henry. View the full article
  12. "We’re growing in a way that is strategic, and that we’re preparing our people to meet the demands of that growth.” MOVE Like This With Bonnie Buol Ruszczyk For CPA Trendlines Research Go PRO for members-only access to more Bonnie Buol Ruszczyk. View the full article
  14. Here is a recap of what happened in the search forums today... View the full article
  15. You can now do in 20 minutes what used to take a full afternoon. Feed two Semrush exports into Claude or ChatGPT, and you’ll get a polished competitor analysis – complete with topic clusters, gap tables, and prioritized briefs. The output looks convincing. The tables are clean. The recommendations sound confident.

That’s the problem. AI can organize and summarize data quickly, but it can’t make strategic decisions. Without the right workflow, prompts, and validation, you risk acting on insights that sound right but lack depth. Used correctly, though, AI can surface meaningful patterns – revealing differences in topical depth, content coverage, and authority signals that influence search visibility.

Here’s a walkthrough of a real two-competitor analysis using Claude and Semrush data, showing how to turn fast AI outputs into a reliable strategy. You’ll get a repeatable workflow, tested prompts, and a validation checklist to catch common mistakes, along with a clear sense of where to trust AI — and where to rely on your judgment.

AI won’t run a competitor analysis for you. But it can compress the manual work — clustering, pattern matching, and synthesis — so you can focus on interpreting intent, validating opportunities, and deciding what’s worth pursuing.

Note: The sites in this analysis are real but anonymized. Site Y is our client, while Competitors A and B are direct competitors in the same niche. The data is from real Semrush exports pulled in early 2026.

Start with data, not a prompt

Whenever possible, start by exporting data from your SEO tool. Don’t ask an AI assistant to guess what an SEO tool can tell you. Otherwise, you’re treating your AI assistant as a measurement tool. It isn’t one, but it’ll try its best to respond to your request anyway. The result is often plausible-sounding traffic estimates, keyword lists, and competitive assessments that are partially or entirely fabricated.

Here’s what we exported and why each piece matters.
Export 1: Organic Research > Pages (top 100 by estimated traffic)

This report tells you which pages are winning. Key columns include the URL, estimated traffic per page, number of ranking keywords per page, the intent breakdown (commercial, informational, navigational, transactional), and the traffic change column that shows momentum. For example, a page pulling 14,500 visits from 1,632 keywords is a different asset from a page pulling 400 visits from 12 keywords. The intent split tells you why that traffic matters.

Export 2: Organic Research > Positions (top 100 keywords by traffic)

This export tells you which keywords are winning. Key columns here are keyword and position, search volume, keyword difficulty, search engine results page (SERP) features (image packs, video carousels, and People Also Ask), and keyword intent tags. Instead of telling you which URLs perform best, this report reveals which search queries drive the most traffic. You need both reports for a complete picture.

The export checklist

For each competitor and for your own site, pull:
- Semrush Organic Research > Pages, top 50-100, sorted by traffic.
- Semrush Organic Research > Positions, top 100-500, sorted by traffic.
- Semrush Keyword Gap report (optional).
- Screaming Frog crawl with URLs, titles, H1s, word count, crawl depth, and internal links (optional). This adds structural context (like how deep pages are buried in the site architecture) that the Semrush exports don’t include.

Conduct a 20-minute competitive review

Next, feed your exports into your AI assistant. Ask it to do three things: classify, cluster, and compare.

Topic taxonomy (per site)

Here’s the prompt I used:

I'm going to give you a Semrush Organic Pages export for a website. Each row is a URL with its estimated organic traffic, number of ranking keywords, and intent breakdown. Please:
1. Assign each URL to a topic category (e.g., "Product - Roof Racks," "Editorial - Buying Guides," "Support - Technical," "Category - Inventory")
2. Assign a page type: Homepage, Product Page, Category Page, Editorial/Guide, Blog Post, Support/Info, Landing Page, or Other
3. Create a summary table showing: topic category, number of pages, total traffic, and dominant intent
Rules:
- Base classifications on the URL path and any context available. Do NOT guess traffic numbers or keyword data. Use only what's in the export.
- If a URL is ambiguous, flag it as "needs manual review" rather than guessing.
- Group similar topics (e.g., don't create separate categories for "off-road accessories" and "off-road bumper kits." Cluster them).
- After classifying, list any URLs where you're less than 80% confident in the classification. I'll verify those manually.
Here's the data: [PASTE PAGES EXPORT]

For Site Y, Claude identified seven topic clusters across 100 pages. Here’s the summary:

| Topic cluster | Pages | Traffic | Dominant intent |
| --- | --- | --- | --- |
| Homepage/Brand | 3 | 14,651 | Mixed (commercial and informational) |
| Buying guides and comparisons | 25 | ~10,600 | Informational and commercial |
| Roof racks and cargo (product) | 2 | ~5,100 | Commercial and transactional |
| Bumpers and armor (product) | 38 | ~2,300 | Commercial |
| Installation and how-to content | 4 | ~1,300 | Informational |
| Inventory/Category | 4 | ~540 | Transactional |
| Other (brand, manufacturer, thin) | 24 | ~1,300 | Mixed |

Even before comparing competitors, this taxonomy tells a story. Our client’s organic traffic is driven more by editorial content (buying guides and comparisons) than by all product pages combined. In fact, a single buying guide pulled 7,336 visits on its own, and the top product page drove 5,021. That editorial strength is both a strategic asset and a vulnerability, since editorial rankings can be more volatile than product page rankings.
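If you want a deterministic baseline to compare the model's bucketing against, the same kind of summary can be approximated with simple path rules. A minimal Python sketch; the URL substrings, cluster names, and sample rows below are invented for illustration, not taken from the actual export:

```python
from collections import Counter, defaultdict

# Illustrative path-substring rules; a real run would use the niche's
# actual URL conventions.
RULES = [
    ("/guides/", "Buying guides and comparisons"),
    ("/roof-rack", "Roof racks and cargo (product)"),
    ("/bumper", "Bumpers and armor (product)"),
    ("/install", "Installation and how-to"),
]

def classify(url):
    for needle, cluster in RULES:
        if needle in url:
            return cluster
    # Mirror the prompt's rule: flag ambiguity instead of guessing.
    return "Other / needs manual review"

# Hypothetical (URL, estimated traffic) rows standing in for the Pages export.
pages = [("/guides/best-racks", 7336), ("/bumper/steel-front", 900), ("/about", 120)]

page_count, traffic = Counter(), defaultdict(int)
for url, visits in pages:
    cluster = classify(url)
    page_count[cluster] += 1
    traffic[cluster] += visits

for cluster, n in page_count.items():
    print(f"{cluster}: {n} pages, {traffic[cluster]} visits")
```

Running the AI taxonomy and a rule-based pass side by side makes disagreements obvious, which is exactly where manual review pays off.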
Competitor comparison

Once you've created a taxonomy for each site, use this prompt to compare them:

I now have topic taxonomies for three competing sites in the same niche. I'm going to give you the summary tables for all three. Please:
1. Build a comparison table showing how each site's traffic distributes across topic categories
2. Identify each site's "content strategy signature": what type of content drives the majority of their organic traffic
3. Flag any categories where one site dominates and the others are weak or absent
4. Note the traffic concentration: what percentage of each site's total traffic comes from their top 3 pages

Rules:
- Use only the data provided. Do not estimate or infer traffic for categories not present in a site's export.
- If a category doesn't exist for a site, mark it "Not present" rather than zero. We don't know if they have content there, only that it doesn't appear in their top 100.

Site Y taxonomy: [PASTE]
Competitor A taxonomy: [PASTE]
Competitor B taxonomy: [PASTE]

When we used this prompt, Claude revealed three completely different strategies from the same niche.

| | Site Y | Competitor A | Competitor B |
| --- | --- | --- | --- |
| Content strategy | Editorial-led | Utility/support-led | Product page-led |
| Top content type | Buying guides and comparisons | Info/support pages (60 of top 100) | Product pages and category pages |
| Non-homepage hero page | Tow capacity and fitment calculator (7,336 visits) | Bolt pattern lookup guide (1,245 visits) | Off-road bumper category (3,200 visits) |
| Traffic concentration (top three) | 75.3% | 81.2% | 71.8% |
| Estimated traffic (top 100) | 35,681 | 7,017 | 11,093 |
| Momentum | Growing (+1,743 net) | Flat (-264 net) | Declining (-1,525 net) |

Manually developing this comparison could require hours of spreadsheet work between categorizing 300 URLs, building pivot tables, and trying to spot patterns across three tabs. But Claude did it in minutes.
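The traffic concentration figure is also trivial to verify by hand from a Pages export: sort the per-page traffic values and take the top-three share of the total. A quick sketch (the sample numbers below are hypothetical, not from the real exports):

```python
def top_n_concentration(traffic, n=3):
    """Share of total estimated traffic held by the top-n pages, in percent."""
    total = sum(traffic)
    if total == 0:
        return 0.0
    top = sum(sorted(traffic, reverse=True)[:n])
    return round(100 * top / total, 1)


if __name__ == "__main__":
    # Hypothetical per-page traffic values from a Pages export.
    pages = [14651, 7336, 5021, 400, 300, 120]
    print(top_n_concentration(pages))
```

Spot-checking one or two AI-computed numbers this way takes seconds and catches the arithmetic slips that assistants occasionally make on large pasted tables.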
The pattern recognition alone (three completely different strategies from three sites selling in the same market) is genuinely valuable output. The numbers show that Site Y pulls five times the organic traffic of Competitor A and three times that of Competitor B, despite all three competing in the same space. Competitor A's second-highest traffic page is a bolt pattern guide on a support subdomain. Competitor B is losing ground fast, with its top category page dropping by 1,184 visits. If you're running a competitive analysis and you don't spot patterns like these, you're missing the strategic story behind the data.

Apply human judgment

If you were to stop after generating the clusters and comparison chart, you'd have a plausible-looking competitive analysis. But the AI-generated output needs human intervention before you make any strategic decisions.

Check the classifications

Spot-check 10-15% of classifications by visiting the URLs. Correct the taxonomy, and then re-run the comparison. This turns an 85% accurate first draft into one with 95% or higher accuracy. The "confidence flag" line in the prompt ("list any URLs where you're less than 80% confident") saves you from having to guess which ones to check. If you skip this step, the misclassifications can distort your entire competitive profile. For example, when I checked Claude's page classifications against the actual live pages, roughly 15% needed correction. It tagged a product comparison page as a blog post. It classified a regional landing page as a category page. And it lumped an FAQ page into the "Other" category even though it served as the site's primary buyer's guide for a specific product line. These misclassifications are the kind of errors that come from categorizing URLs by path structure alone, without seeing the page content.
For example, if a URL path says /blog/best-off-road-accessories/, AI assistants will call it a blog post even if the page functions as a commercial comparison guide.

Consider the intent

AI assistants can surface data points in seconds, but they can't make strategic calls for you. Interpreting the data requires understanding your client's business model, their authority level, and their content capacity. I've seen teams burn an entire content sprint on high-volume informational keywords that drove plenty of traffic and zero leads. If the intent doesn't match your business goals, the volume is irrelevant. For example, Competitor A's second-highest-traffic page is a bolt pattern lookup guide, pulling 1,245 visits per month. Claude flagged this as a content strategy gap for Site Y, since our client had no equivalent utility content. While this is technically correct, it's strategically misleading. The bolt pattern guide targets purely informational intent. The page builds authority and earns links, but it's not a commercial driver. While it can be helpful to create utility content like this, it should be a steady background effort, not a priority sprint. The commercially relevant gaps (product categories, buying guides) come first. To correct for this, add this line to your prompt:

For each opportunity you flag, check the intent breakdown from the Semrush data. If more than 60% of the traffic is informational or navigational intent, flag it separately as "authority builder, not direct conversion driver" so I can prioritize accordingly.

Compare the SERP reality vs. the ranking position

AI assistants work from the position numbers and volume data in your SEO reports. They don't know what the SERP looks like. For example, Claude saw that Site Y ranks Position 3 for "off-road roof rack" (22,200 monthly searches, driving 1,443 visits) and treated it as a straightforward optimization opportunity. Push the page to position one, and capture more traffic. Simple.
But in reality, the SERP is packed with rich features: popular products, an image pack, and People Also Ask. The traditional organic blue links appear barely above the fold on desktop and well below the fold on mobile. Ranking in position one likely wouldn't deliver the traffic increase you'd normally expect from a 22,200-volume keyword because the SERP features absorb most of the clicks. For your top five or 10 priority keywords, do a manual SERP check. If the page is dominated by shopping carousels and video results, then a traditional organic push may not be the right play. Instead, a product feed optimization or video content strategy might be more effective.

Do a gap analysis

Your SEO tool already has a keyword gap report. But a raw list of missing keywords isn't a strategy. Use it as a starting point. Then, let AI cluster those gaps into themes, tiering them by intent and business relevance and turning raw gap data into prioritized actions.

Start with the tool data

We pulled two Semrush Keyword Gap reports comparing Site Y against both competitors. They revealed:

- Missing keywords: 217 keywords where both competitors rank and Site Y doesn't appear at all. Combined search volume: ~49,700/month.
- Weak keywords: 106 keywords where Site Y ranks but gets outperformed by both competitors. Combined search volume: ~33,650/month.

Feed the gap data to AI

Use this prompt with your AI assistant:

I'm going to give you two Semrush Keyword Gap reports:
1. MISSING: keywords where both competitors rank and Site Y doesn't
2. WEAK: keywords where Site Y ranks but competitors outrank us

Each row includes: keyword, intent tags, search volume, keyword difficulty, CPC, and the ranking position for each site. Please:
1. Cluster the keywords into thematic groups (e.g., "bumpers," "roof racks," "overlanding gear," "light bar kits," "torque specs/fitment"). A keyword can only belong to one cluster.
2.
For each cluster, provide: number of keywords, total search volume, dominant intent, and average keyword difficulty.
3. Separate the clusters into tiers based on intent:
- Tier 1 (Commercially relevant): Clusters with predominantly commercial or transactional intent that align with the site's core product/service offering
- Tier 2 (Adjacent commercial): Clusters that are commercially relevant to the broader market but may not be the site's primary product focus
- Tier 3 (Authority builders): Clusters with primarily informational or navigational intent that build topical authority but are unlikely to drive direct conversions
Note: I will review the tier assignments and adjust based on business model fit. AI should make its best guess and flag any clusters where the tier assignment is uncertain.
4. Within each tier, sort by combined search volume
5. Flag any keywords that are branded competitor terms (e.g., a competitor's product or brand name). These are generally not pursuable gaps
6. For the WEAK keywords, separate into "close wins" (Site Y in positions 1-10) vs. "long shots" (Site Y in positions 50+)

Rules:
- Use ONLY the keywords in these exports. Do not suggest keywords not present in the data.
- If intent data is missing or ambiguous, mark it "verify manually" rather than guessing.
- Do not invent search volume or ranking data. If a field is empty, say "not available."
MISSING keywords: [PASTE]
WEAK keywords: [PASTE]

When we used this prompt with Claude, clear thematic clusters emerged from the 217 missing keywords:

| Cluster | Keywords | Combined volume | Dominant intent | Claude's tier |
| --- | --- | --- | --- | --- |
| Bumpers / skid plates | 30+ | ~12,000/mo | Commercial | 1 |
| Roof racks / cargo systems | 10+ | ~8,000/mo | Commercial | 1 |
| Winches (for sale) | 15+ | ~5,500/mo | Transactional | 1 |
| LED light bar kits | 12+ | ~3,200/mo | Commercial | 1 |
| Overlanding gear / overlanding accessories | 10+ | ~2,800/mo | Commercial | 1 |
| Torque specs / installation guides | 8+ | ~1,500/mo | Informational | 3 |
| Branded competitor terms | 6+ | ~1,200/mo | Navigational | Skip |

Correct AI's priorities

This step determines where you spend the next quarter's content budget, so human judgment is essential. If you let an AI assistant set your content priorities based purely on search volume and intent labels, you'll end up chasing someone else's market instead of dominating your own. Volume is seductive, but business alignment is what drives revenue. For example, Claude clustered 323 keywords and tiered them by intent in minutes. But it assigned bumpers/skid plates (~12,000/month volume) the same priority as overlanding gear (~2,800/month) because it doesn't know what Site Y sells. Without our human override, we may have built our content calendar around the wrong cluster.

| Cluster | Claude's tier | Corrected tier | Reasoning |
| --- | --- | --- | --- |
| Overlanding gear / overlanding accessories | 1 | 1: Core business | Directly aligned with Site Y's primary product line. These are the keywords that drive qualified buyers. |
| Bumpers / skid plates | 1 | 2: Adjacent | High volume, commercially relevant to the broader market, and Site Y stocks some of these products. Worth targeting through editorial/guide content over time, but not the priority sprint. |
| Roof racks / cargo systems | 1 | 2: Adjacent | Related to what Site Y does, but not the core offering. |
| Winches (for sale) | 1 | 2: Adjacent | Transactional intent is appealing, but these are a different product category. |
| LED light bar kits | 1 | 2: Adjacent | Related market, but not core inventory. |
| Torque specs / installation guides | 3 | 3: Authority | Informational content that builds topical relevance. Steady background effort. |
| Branded competitor terms | Skip | Skip | Can't realistically win these anytime soon. |

Identify small pushes that make big differences

Next, find the low-effort opportunities with the biggest payoffs. For example, from 106 weak keywords, we separated 17 close wins where Site Y already ranks in positions one through 10. These have real potential:

| Keyword | Volume | Site Y position | Best competitor position | Gap |
| --- | --- | --- | --- | --- |
| overlanding accessories | 1,600 | 3 | 1 | 2 positions |
| overlanding gear | 720 | 3 | 1 | 2 positions |
| overlanding roof rack | 720 | 4 | 1 | 3 positions |
| overlanding accessory kit | 590 | 3 | 1 | 2 positions |
| overlanding storage system | 390 | 3 | 1 | 2 positions |
| overland vehicle accessories | 320 | 3 | 1 | 2 positions |
| overland accessories | 260 | 3 | 1 | 2 positions |
| overlanding cargo rack | 210 | 3 | 1 | 2 positions |

Site Y sits at position three across virtually every "overlanding" variant, while Competitor A holds position one. These are optimization opportunities. A focused push toward better on-page targeting, internal linking adjustments, and content updates incorporating "overlanding" language more explicitly could flip several of these to position one or two. That's a different action than writing a new page. Claude would have defaulted to the latter if we hadn't split the data into close wins and long shots.

Factor in authority context

As a final validation step, pull the backlink profiles for your competitors. When we did this, we found that both had relatively thin link profiles.
Competitor B had 199 backlinks with an average page authority score of just 1.1 (on Semrush's 0-100 scale), while Competitor A had 128 backlinks, averaging a 3.1 authority score. The highest quality links for both came from the same handful of overlanding and off-road vehicle publications. The most-linked pages and the top organic pages barely overlapped for either competitor. Only the homepages appeared in both lists. Competitor B's top backlinks pointed to product pages, while its top organic traffic came from category pages. Competitor A's best links came from editorial features, while their organic traffic was dominated by the homepage and a support page. This tells us their organic rankings are driven more by topical relevance and on-page SEO than by direct link equity to individual pages. It means the keyword gaps we identified are likely winnable through content and optimization rather than requiring a major link building campaign.

Turn the gap analysis into a brief

Use your competitor analysis to draft a content brief with AI. Input this prompt:

Based on the gap analysis we ran, [DESCRIBE PRIORITY CLUSTER] emerged as a priority. Draft a content brief for optimizing the existing presence and/or creating a new page to capture this cluster. Include:
1. Primary and secondary target keywords (from our data only)
2. Recommended page type and format (based on what's currently ranking for these terms)
3. Content structure with suggested H2s
4. Content elements the ranking competitors include that our page should match or exceed
5. Estimated word count range based on competing content

Then, in a separate section called "Differentiation: For Human Review," suggest 3 possible angles that would make this page genuinely different from what already ranks. These are suggestions for me to evaluate, not final decisions.

Before finalizing the brief, cross-reference the target keywords against Site Y's existing pages export.
Flag any existing pages that already rank for or target similar keywords. These are potential cannibalization risks that need to be resolved before creating new content.

Rules:
- Do not fabricate competitor content details. Base element recommendations on what we know from our data (URLs, page types, keyword footprints)
- If you need information you don't have (e.g., actual competitor page content), say "manual review needed: [specific thing to check]" rather than guessing

From this prompt, Claude drafted a clean brief with target keywords from our data, a recommended format (long-form guide with product integration), and an H2 structure. It also performed a cannibalization check. Because we added a cross-reference line to the prompt, Claude flagged that Site Y already had a related page pulling 838 visits. If we'd created a new page without checking, it would have competed with the existing page. That one line in the prompt saved us from unnecessary internal competition.

But the differentiation section needed human input. Only someone who knows Site Y's brand voice and customer objections could pick the right angle from these suggested options:
- First-hand testing and review angle: Site Y installs and tests these products, so they can show real usage via trail tests, installation photos, and customer experiences.
- Comparison angle: What's the difference between overlanding versus off-road? This directly addresses the keyword overlap we noticed in the gap data.
- Buyer qualification angle: Who needs overlanding gear versus who would be fine with standard off-road accessories?

The experience signals (actual trail tests, customer stories, installation details) also need substantial human oversight. This is where Google's emphasis on experience, expertise, authoritativeness, and trustworthiness meets practical execution. If you don't have genuine first-hand experience to draw on, no amount of keyword optimization will close that gap.
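The cannibalization cross-reference is one check you don't need an AI assistant for at all: intersect the brief's target keywords with the keywords your existing pages already rank for in the Positions export. A minimal sketch, assuming you've reduced the export to (keyword, url) pairs (the sample data below is hypothetical):

```python
from collections import defaultdict


def cannibalization_risks(target_keywords, positions):
    """Map each target keyword to existing URLs already ranking for it.

    positions: iterable of (keyword, url) pairs from a Positions export.
    Returns only the target keywords that collide with existing pages.
    """
    ranking = defaultdict(set)
    for keyword, url in positions:
        ranking[keyword.lower()].add(url)
    return {
        kw: sorted(ranking[kw.lower()])
        for kw in target_keywords
        if kw.lower() in ranking
    }


if __name__ == "__main__":
    # Hypothetical data for illustration only.
    existing = [
        ("overlanding gear", "/guides/overlanding-gear"),
        ("roof rack", "/product/roof-rack"),
    ]
    brief = ["overlanding gear", "overlanding storage system"]
    print(cannibalization_risks(brief, existing))
```

Exact-match lookup like this only catches identical keywords; near-duplicates ("overlanding gear" vs. "overland gear") still need the AI pass or a manual scan, which is why both checks are worth running.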
Run through a validation checklist

Before you act on any AI-assisted competitor analysis, go through this checklist to prevent the most common errors.

Data validation
- Base all analysis on tool exports (Semrush, Ahrefs, Screaming Frog), not AI-generated estimates.
- Check for export dates (if data is older than 90 days, recent algorithm updates or market shifts may have changed the picture).
- Use a meaningful sample size (top 50+ pages per competitor, not just top 10).
- Include both Pages and Positions exports.

Classification validation
- Spot-check 10-15% of the AI assistant's page type and topic classifications against live pages.
- Correct any misclassifications and re-run the comparison.
- Check whether AI created overly granular or overly broad categories.
- Verify that pages on subdomains or unusual URL structures were classified correctly.

Intent validation
- Check intent tags (not just search volume) on all flagged opportunities.
- Separate commercially relevant gaps from informational and authority-building gaps.
- Verify intent interpretation with a manual SERP check on your top three to five priority keywords.
- Make a conscious decision to pursue, defer, or skip high-volume informational keywords.

Prioritization validation
- Confirm your AI assistant's priority ranking aligns with your business goals, not just search volume.
- Check whether the product or service matches what you sell if a cluster looks like tier one based on volume alone.
- Determine if opportunities are achievable given site authority and content resources.
- Confirm no opportunities are branded competitor terms you can't realistically win.
- Check whether a gap is better addressed by optimizing existing content versus creating new content.

Brief validation
- Choose a differentiation angle for AI-generated briefs (not just keywords and structure).
- Verify the recommended content format matches what ranks in SERPs.
- Confirm the brief doesn't target keywords that your own site already ranks for.
Identify E-E-A-T signals and determine what original content the page needs that AI can't generate.

The shift to AI-assisted SEO competitor analysis

AI tools have changed where you spend your time when conducting a competitor analysis. The data gathering, clustering, cross-referencing, and initial synthesis that used to consume most of your time? AI handles that efficiently. Instead, AI assistants free up thinking time. Now, you can spend that time on the parts that determine whether your analysis leads to results: interpreting intent, validating classifications, and making strategic calls about what's worth pursuing and what's a distraction. View the full article
  16. The company will be using a simplified name and a new logo it says shows its unified business model, but its longstanding tagline will stay in place. View the full article
  17. The contract rate on a 30-year mortgage dropped for a third week to 6.35%, the lowest since mid-March View the full article
When it comes to understanding customer experiences, effective feedback methods are essential for businesses. Customer feedback surveys gather valuable quantitative and qualitative data, while in-app feedback prompts capture immediate reactions. Real-time chat integration allows for genuine insights during interactions. Customer interviews and focus groups provide deeper qualitative insights, and social listening helps monitor online conversations. Each method contributes to a more customer-centric approach, but how can you integrate these strategies for maximum impact?

Key Takeaways
- Use customer feedback surveys with a mix of question types for concise, unbiased insights on user experiences.
- Implement in-app feedback prompts for immediate responses, increasing participation rates up to five times.
- Integrate real-time chat to gather genuine feedback during customer interactions, enhancing service strategies.
- Conduct customer interviews and focus groups to explore deeper qualitative insights into customer needs and preferences.
- Monitor social listening and online reviews to capture real-time insights and address customer pain points effectively.

Customer Feedback Surveys

Customer feedback surveys are essential tools that gather both quantitative and qualitative data about user experiences, preferences, and satisfaction levels with your products or services. To collect valuable feedback, keep surveys concise and unbiased, and include a mix of closed-ended and open-ended questions. This approach encourages thorough responses and minimizes respondent fatigue. One popular method is the Net Promoter Score (NPS), which categorizes customers into promoters, passives, and detractors, helping you gauge loyalty and advocacy.
Timing plays an important role in survey distribution; embedding feedback widgets on your website or app allows for real-time feedback collection during user interactions, leading to higher response rates.

In-App Feedback Prompts

How can in-app feedback prompts transform the way you gather insights from users? These prompts, consisting of micro-surveys with 2-3 questions, let you capture user feedback immediately after interactions. By using in-app feedback tools, you can boost response rates considerably, yielding up to 5x higher participation compared to traditional methods. Smart triggers based on user behavior ensure that feedback requests appear at ideal moments, improving their relevance without disrupting the user experience. Here's a quick overview of the benefits:

| Benefit | Description | Impact on strategy |
| --- | --- | --- |
| Timely insights | Captures feedback right after interactions | Improves your customer feedback strategy |
| Higher engagement | Increases response rates dramatically | Strengthens your feedback solutions |
| Seamless integration | Fits naturally within the app | Streamlines how you gather customer feedback |

Incorporating in-app feedback prompts is essential for understanding customers and acting on their feedback effectively.

Real-Time Chat Integration

Integrating real-time chat into your customer service strategy can greatly improve the way you gather feedback. With real-time chat integration, you can collect immediate insights from customers during interactions, capturing genuine feedback as issues arise. This method lets you implement proactive triggers that prompt customers for feedback based on their specific behaviors. As a result, you can achieve response rates up to five times higher than traditional feedback methods.

Customer Interviews and Focus Groups

When seeking to understand customer needs and preferences, interviews and focus groups provide valuable qualitative insights that quantitative surveys often miss.
These methods let you dig deeper into customer feedback, uncovering the reasons behind customer sentiment and shaping your product roadmap accordingly. They help you:
- Gain a nuanced understanding of customer perceptions.
- Encourage collaboration and shared insights through group discussions.
- Create a more responsive, customer-centric culture.

Social Listening and Online Reviews

Social listening and online reviews are vital components of modern customer feedback strategies, providing valuable insight into customer opinions and behaviors. By monitoring social media conversations, you can gain real-time feedback on customer sentiment and emerging trends. Engaging with online reviews is equally important, as 93% of consumers say these reviews influence their purchasing decisions. Responding to both positive and negative feedback builds brand trust, as 70% of consumers expect brands to acknowledge their reviews.

| Method | Benefit |
| --- | --- |
| Social listening | Real-time customer insights |
| Online reviews | Influence on purchase decisions |
| Customer feedback | Identify pain points |
| Customer retention | Improve overall experience |

Using social listening tools and analyzing online reviews reveals common themes, allowing you to address specific concerns, improve the customer experience, and ultimately enhance customer retention.

Frequently Asked Questions

What Is the 10 to 10 Rule in Customer Service?

The 10 to 10 rule in customer service emphasizes responding to customer inquiries within ten minutes and ensuring a resolution or follow-up within ten hours. This approach encourages quick engagement, which improves customer satisfaction and retention rates. By prioritizing timely communication, you streamline support processes and build trust with your customers. Implementing this rule not only enhances the overall customer experience but also positions your business favorably against competitors who may not prioritize swift responses.

Which Tool Is Most Effective in Gathering Customer Insights?
To gather customer insights effectively, consider using real-time feedback tools like Zendesk or Drift. These platforms integrate with your support operations, capturing customer input immediately after interactions. In-app surveys offered by tools such as Intercom can greatly increase response rates, providing timely data. For organized feedback management, platforms like UserVoice streamline feature requests, allowing you to prioritize based on user impact and act on insights efficiently.

What Is the Most Immediate Way to Gather Customer Feedback?

The most immediate way to gather customer feedback is through real-time methods like in-app surveys or live chat integrations. By triggering these feedback requests during user interactions, you capture authentic reactions while the experience is fresh. Contextual micro-surveys that focus on specific user actions can greatly increase response rates. Automated follow-ups after live chat sessions also let customers provide instant feedback, helping you address issues swiftly and effectively.

What Are the 3 C's of Customer Satisfaction?

The 3 C's of customer satisfaction are Consistency, Communication, and Customer Experience. Consistency means delivering the same high-quality service across all touchpoints, which builds trust. Communication involves actively listening to your customers, responding quickly, and addressing their feedback, which improves their perception of your brand. Customer Experience encompasses every interaction a customer has with your business, where positive experiences can considerably boost loyalty and retention rates. Focusing on these three elements is essential for success.

Conclusion

Incorporating these five customer feedback methods can greatly improve your understanding of customer experiences.
By using customer feedback surveys, in-app prompts, real-time chat, interviews, and social listening, you can gather valuable insights that drive improvements. These approaches not only boost response rates but also cultivate a more customer-centric culture within your organization. Ultimately, leveraging these strategies will help you improve customer satisfaction and retention, ensuring your business remains competitive and responsive to changing needs. Image via Google Gemini This article, "5 Effective Customer Feedback Methods for Instant Insights" was first published on Small Business Trends View the full article
  19. Previous bid for FTSE 100 group from Swedish private equity firm rejectedView the full article
  20. AI search is caught in a self-reinforcing loop, where synthetic content feeds retrieval systems that present it back as fact. The post AI Search Is Eating Itself & The SEO Industry Is The Source appeared first on Search Engine Journal. View the full article
We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication. The Samsung Galaxy S26 is down to $799.99 for the 256GB unlocked version, a drop from $899.99 and its lowest price so far, according to price trackers. This is Samsung's smallest flagship for 2026, and it leans into that idea of giving you most of the high-end experience without the size or cost of the Ultra model. The design feels familiar if you have used a Galaxy before, and is relatively compact at 6.3 inches, so it sits comfortably in one hand without feeling cramped. It also comes with an IP68 rating for dust and water resistance. Samsung Galaxy S26 Unlocked Android smartphone (256GB, black): $799.99 at Amazon (list price $899.99, save $100.00). It runs Android 16 with Samsung's One UI 8.1, and it is set to receive seven years of updates, which is still one of the longest support windows you will find on an Android phone. Performance is not a concern here—the Snapdragon 8 Elite Gen 5 processor for Galaxy keeps everything fast, whether you are jumping between apps, editing photos, or playing games. Plus, it has a bright and sharp display (with a 120Hz refresh rate) that holds up well outdoors. Samsung's newer AI tools are built-in, too—you can edit photos using text prompts, clean up document scans, or get suggestions through features like Now Brief. That said, its battery life is average, with just over 15 hours of video streaming, according to this PCMag review. The triple-camera system, with a 50MP main sensor, 12MP ultrawide, and 10MP telephoto, delivers solid results in most conditions. Photos look natural, and low-light shots benefit from a brighter main sensor, though you may notice some softness compared to the Ultra model. The camera module also causes a slight wobble when the phone is placed flat, which is common but still noticeable.
For most people, though, the S26 covers the basics quite well—delivering strong performance, a bright display, and capable cameras in a form factor that is easier to handle than most flagship phones. View the full article
The job market is tough right now. According to the Bureau of Labor Statistics, job openings have been trending down, and are currently below pre-pandemic levels. In a hypercompetitive economy, people entering the workforce are facing fewer opportunities than just a few years ago. And for the 1 in 3 American adults with a justice-involved past, or any interaction with the criminal justice system as a defendant, their record is another obstacle in an already challenging job search. April marks Fair Chance Month, an annual opportunity to spotlight reentry programs, resources, and skills training for formerly incarcerated people. Yet, as the conversation around second chance hiring has expanded each year, a criminal record can still reduce a candidate's chances of a second interview by 50%. Even when people with justice-involved pasts take advantage of every opportunity, exclusionary hiring practices and systemic barriers make finding and retaining employment an uphill battle. For example, returning citizens frequently have trouble securing safe and reliable housing and transportation, and are therefore 10 times more likely to experience homelessness than the general public. When we systematically exclude people from employment because of a checked box, we're not just denying them jobs, we're denying them the foundation they need to rebuild their lives.

BREAK DOWN BARRIERS

Second chance hiring practices can—and should—be tailored to each company's unique needs and challenges, but they have the potential to benefit any industry. Across industries and sectors, 85% of HR professionals and 81% of business leaders say individuals with justice-involved pasts perform the same as, or better than, employees without records. This reinforces the value second chance hires can bring to the company. At Frontier Co-op, we've seen firsthand the tangible impact second chance hiring can make on a community.
We implemented our flagship Breaking Down Barriers to Employment program in 2018 to take a more holistic approach to addressing employment barriers. It involves adopting second chance hiring practices and working with a local nonprofit partner to provide access to comprehensive wraparound services. Internally, we provide subsidized childcare options, transportation, and an apprenticeship and skills training program. Most recently, we launched a savings match program to support our workforce’s long-term resilience. We’ve seen how this has grown our workforce, as more than 25% of Frontier Co-op’s production hires in the last year were justice-involved individuals. While anonymity is critical to the program’s success, one employee—Alisia Weaver—has chosen to share her story. She began as an apprentice and has grown into her current role as a machine operator. She will celebrate her sixth anniversary this fall. As an important part of our co-op’s advocacy in this space, Alisia offers her perspective on the impact second chance hiring has had on her life and future. “This experience has helped me advance in all aspects of my life. I have my own place, a vehicle, and daycare for my son. I’ve come forward to tell my story because I just want to encourage people and inspire them not to give up, no matter what setbacks they face,” she said. “I also want to encourage companies to try something different and consider adopting second chance hiring practices. It could be beneficial for you, but it could also change someone’s life.” RETHINK YOUR HIRING PRACTICES By embracing candidates with diverse backgrounds and perspectives, we’ve seen how this approach strengthens the resilience of both our workforce and our business. Most meaningfully, it has shaped our culture in lasting ways. Over the years, many employees have stopped me to share how proud they are of our commitment to fair hiring. 
So many people know or love someone who has been held back by a justice-involved past, and it matters to them to see their employer offering people a truly fresh start. But we can’t make these changes in silos. As a second chance employer, we’re proud to partner with organizations like the Responsible Business Initiative for Justice (RBIJ) and REFORM Alliance, which are leading the change and helping businesses remove barriers and create career opportunities for these individuals, to ensure a more inclusive workforce for all. “Businesses play a crucial role in keeping communities safe and healthy,” said Maha Jweied, RBIJ’s CEO. “Hiring justice-impacted job seekers can break cycles of incarceration, revitalize neighborhoods, and forge pathways for people to reach their potential—and that includes those with past convictions. By prioritizing inclusive hiring, we not only demonstrate our commitment to the communities we belong to, but also enhance our organizations with capable, dedicated, and resilient talent.” We know we can’t hire everyone regardless of their past, and we don’t view this program as a rehabilitation process. Our intent is simply to eliminate a bias that could negatively impact good candidates along the hiring journey. That’s something we think every organization and company can aim to do. This Fair Chance Month, I’d challenge all business leaders to take a moment to think a little differently—a little critically—about their hiring processes. Set aside time for an open, internal conversation about whether criteria related to justice involvement may unnecessarily be limiting candidate consideration. Reach out to a colleague who is doing this work to hear more about their experience, ask candid questions, and understand the challenges they’ve navigated. My door is always open. Tony Bedard is CEO of Frontier Co-op. View the full article
  23. Plug-in solar is on the way, and it could cut your electric bills. A growing number of states are poised to pass bills supporting the panels, which are designed for DIY installation: Hang one out a window or set it on a deck, plug it into a regular outlet, and power starts flowing back into your home. A new calculator helps you estimate how much you can save on power bills, using your zip code to estimate how much sunshine you get and how much you’re paying for electricity now. The tech could be especially useful in cities like New York, where renters have steep electric bills and don’t have roofs to install traditional solar panel systems. “A huge percent of this country is composed of renters,” says Cora Stryker, cofounder of Bright Saver, a nonprofit that advocates for the technology and just released the calculator. “What are you supposed to do? I mean, it’s really a powerless feeling—pun intended—to see your energy bills just spike and not be able to do anything about it.” Homeowners who don’t want to invest in a full rooftop system can also use plug-in panels. Designed for self-installation, they avoid the costs of permitting, inspections, hiring an electrician, and the marketing expenses of solar companies, which together make up nearly half the price of traditional systems. “The reason this is a game changer is we’re taking all those extra costs out, and we’re delivering the dirt-cheap cost of the technology to consumers so they can install it themselves,” Stryker says. “It’s pushing us toward a tipping point. For years now, clean energy has been cheaper to produce than fossil fuel alternatives. However, for the consumer that is not true. This is the beginning of that.” Plug-in solar panels, also known as balcony solar, became widespread in Germany when electricity bills surged because of Russia’s war in Ukraine; their use continues to grow throughout Europe. (In Germany, they’re so common that you can buy them at Ikea.) 
In the U.S., regulatory hurdles are beginning to fall. Right now, though the panels aren’t illegal, they require a complicated process of approval from utilities. But states are beginning to change that. Utah was the first to pass a law supporting the tech last year, exempting consumers from the need to get approval from utilities. Maine followed this month. Bills also passed in Colorado, Maryland, and Virginia and are awaiting signatures from governors. More than 20 other states are now considering bills, from both Republican and Democratic lawmakers. Some utilities have argued that the devices pose safety risks, but advocates say that years of use in Germany have proven that they’re safe. UL Solutions, the standards organization, is currently working on certifying devices to a new safety standard that was created at the beginning of the year, though Stryker says devices on the market in Utah meet existing standards. The panels come in different sizes, ranging from around 400 watts to 1.2 kilowatts, and cost between $400 and $2,000. A small panel could cover the power used by a full-size fridge. An 800-watt system could cover that along with a TV, lights, and other small equipment like routers. “It’s most meaningful for your background electricity demand, meaning what is running all the time,” Stryker says. It’s not like a whole rooftop system, which could power your entire house. But it can still make a noticeable difference on your electricity bills. In New York City, for example, someone using a 1,200-watt panel on an apartment balcony could potentially save $339 in a year. (It’s worth noting that the calculator doesn’t attempt to include whether the panel is facing south or how much other buildings might be shading it.) In Oakland, California, someone with the same panel could potentially save $491 because of the sunnier weather. The devices could help people who are struggling the most to afford electric bills, especially low-income renters. 
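The arithmetic behind a savings calculator like the one described above is simple enough to sketch: annual savings ≈ panel wattage × daily peak-sun hours × 365 × the local electricity rate, discounted for real-world losses. The sun-hour, rate, and efficiency figures below are illustrative assumptions, not Bright Saver’s actual model or data.

```python
def annual_savings(panel_watts: float, peak_sun_hours: float,
                   rate_per_kwh: float, system_efficiency: float = 0.8) -> float:
    """Rough yearly bill savings from a plug-in solar panel.

    panel_watts: rated output (e.g., 800 or 1200)
    peak_sun_hours: average daily full-sun-equivalent hours for the location
    rate_per_kwh: local electricity price in dollars
    system_efficiency: assumed losses from orientation, shading, inverter
    """
    kwh_per_year = (panel_watts / 1000) * peak_sun_hours * 365 * system_efficiency
    return kwh_per_year * rate_per_kwh

# Illustrative only: a 1,200 W panel, ~4 sun hours/day, $0.24/kWh
print(round(annual_savings(1200, 4.0, 0.24)))  # 336
```

Plugging in sunnier weather or a higher utility rate moves the result up proportionally, which is why the same panel saves more in Oakland than in New York.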
“Because electricity varies so much in cost, it really becomes an equity issue,” Stryker says. “The people living in the densest parts of the United States have the highest electricity [rates] almost universally.” View the full article
  24. Samsung's One UI software for its Galaxy phones comes packed with features and functionality, but there are also several official extra apps made by Samsung that don't come preinstalled on its phones—and they're well worth checking out. I've already written about the various Good Lock plug-ins—which let you build your own keyboards and set separate volume levels for individual apps—but that's not all there is to explore when it comes to additional apps. There's also Galaxy Enhance-X, a tool for polishing and improving your photos and videos, as well as manipulating digital documents. Enhance-X can do everything from applying cinematic filters to pictures, to scanning in documents and translating them at the same time, and it's free to install and use. It's also just been given a major revamp, with Samsung cleaning up the app's interface as well as adding some additional features. If you use a Samsung phone, you can get Enhance-X from the Galaxy Store.
Learning the basics in Enhance-X
There are now three tabs to work with in Enhance-X, part of the recent app interface revamp: Plug-ins, Home, and History. The Plug-ins tab is a good place to start, because it shows off some of the app's capabilities: Tap the download icon (the downward arrow) on FilmStyle to access nine extra filters for your pictures. These filters and many more effects can be applied to your photos and videos from the Home tab. This tab is essentially a file picker—you can select one or more photos and videos to work with. To switch to the standard Gallery app view (complete with albums and collections), tap the flower-style icon in the top right corner. Enhance-X comes with optional plug-ins. Credit: Lifehacker Pick one or more images, and you can choose between Photo tools and Doc tools (for scans) at the bottom; if you're selecting videos, there's just the Video tools option.
That then takes you into the full editing interface, where you can see everything Enhance-X has to offer (including the FilmStyle filters). Use the icons at the bottom of the screen to browse through the tools, which are typically one-tap enhancements that the app will configure itself. There's Colorize for adding color to black and white photos, for example; HDR for boosting dynamic range; and Fix blur for images that aren't quite sharp enough. HDR is one of the color customization options. Credit: Lifehacker Many of these options are useful quick fixes, but there are some fun tools as well. Tap Creative then 24-hr time lapse, and you can turn any image into a short video—nothing in the image will move, but the colors will shift as if you're seeing the picture go through a full night-and-day cycle. Some of the tweaks available will vary depending on the type of image or video you've selected. Pick a portrait shot for example, and you get access to the Face tool—this gives you sliders for adjusting the smoothness and tone of the facial features, and you can adjust the strength of each effect individually.
Exploring more Enhance-X features
If you pick Film style filters from the Suggested tab when editing a picture, you can try out the filters we downloaded earlier. Use the thumbnails to browse between the different effects and see how they work—if you tap the small "i" button to the left you get a useful rundown of what each filter does and which types of images it works best with. Over on the video tools side, you've got options like Slow mo. This presents you with a timeline of your video, and if you press and hold at any point in that timeline, Enhance-X adds a special slow-motion effect. The app lets you preview changes before applying them.
Credit: Lifehacker There are also simple trimming tools for your video clips, as well as a Single take section where you get to play around with effects like rebound (which creates a video that can loop infinitely) and highlights (which picks out the best parts of the video). Each effect can be previewed on screen before saving. For documents scanned as photos, there are a host of different options. You're able to apply crops, filters (to add or remove color), text, and scribbled highlights; you can combine different scans together in one document; and you can remove any unwanted scanned elements (like fingers). There are many different actions you can take on scanned documents. Credit: Lifehacker Choose Add text, for example, and you get the option to drop a text box right on top of your scan, with settings for font size, style, and color. Whether you need to add annotations or correct mistakes on the original document, it's straightforward and intuitive to use, and means you don't have to call up a separate app or start editing on a desktop interface. Head to the History tab to review all your edits and undo them if necessary. Enhance-X is something I've kept on my Galaxy phone ever since I discovered it, and it's often come in handy for edits that it can do more quickly than other apps or that other apps can't do at all—including the apps that actually come with One UI. View the full article
  25. As artificial intelligence integrates deeper into our workflows, understanding its vulnerabilities is critical. A recently exposed vulnerability known as Best-of-N (BoN) jailbreaking has redefined how we view AI safety. Here’s a breakdown of BoN jailbreaking, how the attack works, and why it creates real risk for your data, brand, and the AI tools you rely on. First, a quick vocabulary check Before getting into BoN, there are two terms you need to actually understand, not just nod at. Brute force attack: Imagine trying to crack a four-digit PIN by starting at 0000, then 0001, then 0002, all the way to 9999. No cleverness, no strategy, just trying every single combination until one works. That’s brute force. It’s dumb, slow, and works disturbingly often if nobody stops it. Stochastic: This just means random, or more precisely, probabilistic. AI models are stochastic because they don’t produce the exact same output every time you ask the same question. There’s built-in variability in how they generate responses. That’s by design. It’s what makes AI feel less robotic. It’s also a liability. What is Best-of-N jailbreaking? BoN is brute force, but smarter. Instead of trying every possible combination from scratch, it exploits the built-in randomness of AI models. The logic is simple: if an AI gives slightly different answers every time, and some of those answers slip past its own safety rules, then the attacker just needs to ask enough times, in enough slightly different ways, until one version of the question gets the forbidden answer through. That’s not just a technical edge case. It means safeguards can be bypassed at scale, with direct implications for how your team uses AI tools every day.
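The "ask enough times" logic is just probability. If each attempt independently slips past the filter with some small probability, the chance that at least one of many attempts succeeds climbs fast. This toy model is not the paper's algorithm, and the 1% per-attempt figure is purely illustrative, but it shows why volume beats a porous filter:

```python
import random

def bypass_probability(p_single: float, attempts: int) -> float:
    """Chance that at least one of `attempts` independent tries slips
    past a filter that fails with probability `p_single` per try."""
    return 1 - (1 - p_single) ** attempts

def simulate(p_single: float, attempts: int, trials: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the same quantity, for sanity-checking."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < p_single for _ in range(attempts))
        for _ in range(trials)
    )
    return hits / trials

# Even a 1% per-attempt failure rate collapses quickly under volume:
print(round(bypass_probability(0.01, 100), 2))  # 0.63
print(round(bypass_probability(0.01, 500), 2))  # 0.99
```

A filter that stops 99% of individual attempts still loses most of the time once an attacker automates a few hundred variations.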
The research behind this technique describes it as a “simple black-box algorithm.” Black-box means the attacker doesn’t need to see inside the model. No access to the code, no insider knowledge required. They’re working from the outside, just like any regular user would. Think of it like a kid asking for candy when you’ve already said no. The first “no” doesn’t stop them. They rephrase, change their tone, ask at a slightly different moment, and try from a different angle. They ask another adult or wear you down, not by finding a magic phrase, but by generating enough variations that eventually one lands at the exact moment your patience runs out. BoN is that kid, automated, running thousands of variations per minute. How the attack works — and how easy it is to set up This is the part that should make you uncomfortable, because it shows how little effort it takes to turn this into a real-world risk. The setup isn’t sophisticated. Step 1: Augmentation The attacker takes a forbidden prompt, something the AI is trained to refuse, and generates hundreds or thousands of variations. Not clever rewrites, just noise: random capitalization (HoW Do I…), scrambled characters, inserted typos, and meaningless filler tokens. Ugly, broken-looking text that a human would immediately recognize as weird, but that an AI processes token by token. Step 2: Bombardment All those variations get sent to the model simultaneously, or in rapid succession, using a simple script. This isn’t a complex operation. Anyone with basic Python knowledge and access to an API can automate this. The compute cost is low. The barrier to entry is lower than most people assume. Step 3: Selection An automated grader, often just another LLM, scans all the outputs and flags the one response that bypassed the safety filter and delivered the restricted content. The attacker doesn’t read thousands of responses. The second AI does the screening for them. That’s the full attack. 
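Step 1 above, the augmentation, really is as crude as it sounds. A hedged sketch of what that noise generation might look like (the specific perturbations here, random capitalization, adjacent-character swaps, and a duplicated character, are illustrative, not the paper's exact augmentation set):

```python
import random

def augment(prompt: str, n_variants: int, seed: int = 0) -> list[str]:
    """Generate noisy variants of a prompt: random capitalization,
    a couple of adjacent-character swaps, and one injected typo.
    Illustrative only."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        # random capitalization: HoW Do I...
        chars = [c.upper() if rng.random() < 0.5 else c.lower() for c in prompt]
        # scramble: swap two pairs of adjacent characters
        for _ in range(2):
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        # typo: duplicate one character
        j = rng.randrange(len(chars))
        chars.insert(j, chars[j])
        variants.append("".join(chars))
    return variants

for v in augment("how do I do the thing", 3):
    print(v)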
No special hardware, no insider access, and no advanced degree in machine learning. The numbers behind BoN The original research clocked an 89% attack success rate on GPT-4o and 78% on Claude 3.5 Sonnet when running 10,000 augmented prompt variations. With just 100 variations, Claude 3.5 Sonnet still failed 41% of the time. This didn’t quietly fade into the research archives when the models got updated. It was presented as a poster at NeurIPS in December 2025. NeurIPS is the most prestigious machine learning conference in the world. And the attack has only gotten faster. Newer BoN-based techniques can now achieve comparable success rates while cutting the time to attack from hours to seconds. Meanwhile, OWASP, the gold standard for cybersecurity risk rankings, listed prompt injection, the category BoN falls under, as the No. 1 vulnerability in their 2025 LLM Top 10. The success rate also follows a predictable power-law curve, meaning attackers can mathematically forecast how many attempts they need before they break through. Forget luck, we’re talking about a calibrated, scalable operation. BoN also works across all modalities: text, images (change the font, background, and color), and audio (adjust pitch, speed, and background noise). Every format and frontier model tested. Why it’s a marketing and branding problem Cybersecurity and marketing used to be separate conversations. AI collapsed that boundary and put brand risk directly inside your AI workflows. Safety filters are porous, not protective The research is unambiguous: given enough augmented attempts, some will get through. This applies to every AI tool in your stack, whether it’s internal, customer-facing, or embedded in your content workflows.
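The power-law forecasting claim is worth making concrete. The research reports that attack success rate (ASR) scales predictably with the number of attempts N; one common way to express that family of curves is -ln(ASR) ≈ a·N^(-b), which an attacker can fit from a few cheap measurements and then extrapolate. The exact parameterization below is an assumption for illustration, but the two data points (41% at 100 variations, 78% at 10,000, for Claude 3.5 Sonnet) come from the figures above:

```python
import math

def fit_power_law(ns, asrs):
    """Fit -ln(ASR) = a * N**(-b) by linear regression in log-log space.
    Returns (a, b). Illustrative parameterization, not the paper's exact fit."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(-math.log(asr)) for asr in asrs]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

def predict_asr(a, b, n):
    """Predicted attack success rate after n attempts."""
    return math.exp(-a * n ** (-b))

# Fit on the two reported Claude 3.5 Sonnet points, then interpolate:
a, b = fit_power_law([100, 10000], [0.41, 0.78])
print(round(predict_asr(a, b, 1000), 2))  # ~0.62 at 1,000 attempts
```

That is what "calibrated" means in practice: the attacker can budget compute against a target success rate instead of hoping to get lucky.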
Your prompt inputs carry legal risk When your team pastes a client brief, a competitor’s ad copy, or licensed third-party content into a prompt to “get AI help,” you’re introducing material that could later be extracted. BoN jailbreaking demonstrates that copyrighted content can be retrieved from model weights under the right conditions. If an AI can reproduce verbatim text when sufficiently probed, that content is encoded in there. The safety filter was the only thing standing between it and the output. Brand exposure through your own AI tools If someone uses BoN to jailbreak an AI tool your brand has deployed (a customer chatbot, say, or a content generation tool) and extracts harmful, offensive, or legally compromising output, the story doesn’t start with “AI was jailbroken.” It starts with your brand name. You know this, journalists know this, and social media content creators know this. Attack composition makes this worse BoN doesn’t operate alone. Combining it with a “prefix attack,” a carefully crafted phrase attached to the start of each prompt, boosted success rates by an additional 35% while requiring fewer attempts. The technique actively evolves toward greater efficiency. What you should do now Audit what goes into your prompts Treat prompt inputs with the same sensitivity you’d apply to data under GDPR. Licensed content, client briefs, proprietary information — none of it belongs in a third-party AI tool without a clear data policy from the vendor. Stop treating safety filters as compliance If your AI vendor says the model is safe and that settles it for you, you’ve outsourced your risk assessment to the party that profits from minimizing it. Output monitoring, anomaly detection on request volume spikes, and continuous red-teaming are due diligence. Understand that the attack surface spans every modality Text, image, and audio. BoN applies across all of them.
If your brand uses any AI-powered tool that handles user inputs in multiple formats, the vulnerability applies. Log everything Prompts in, outputs out. If an incident happens, legal will ask what the model was given and what it produced. Without logs, you have no defense and no evidence. What BoN jailbreaking reveals about AI safety limits The same built-in randomness that makes AI useful for creative and marketing work makes it exploitable at scale. BoN jailbreaking is an active, validated, and accelerating threat that the cybersecurity community is racing to defend against. Most marketing teams haven’t yet priced in the brand, legal, and reputational stakes. The ones that do first will build defensible practices before they need them. The rest will learn it through an incident they didn’t see coming, and won’t be able to explain after the fact. View the full article
  26. If you’ve been building consumer hardware for any real amount of time, you know the pattern. Most of these shifts start the same way. The sensor exists, but it’s stuck in clinical settings where it’s expensive, awkward, and not something anyone would realistically use day to day. At some point, someone figures out how to shrink it down enough to fit into a real product, and a few companies take an early shot at turning it into something people actually want. Early on, it’s easy to dismiss. It looks niche, maybe even like a gimmick. But adoption starts to build, usually more gradually than people expect at first. Then it picks up, and within a product cycle or two, it stops feeling optional and just becomes part of the baseline. That’s typically the point where it becomes clear who planned for it and who didn’t. And if you didn’t, you’re trying to retrofit something fundamental into a product that wasn’t designed for it. In almost every case, most of the market waits. Not for the technology but for validation from a small set of industry leaders. By the time that signal arrives, the category is already defined, and the leaders are already ahead. Heart rate monitoring is the textbook case. Electrocardiography has existed since the early 1900s. For decades, continuous heart rate data meant a clinical setup or, at a minimum, a chest strap and a willingness to look like you were under house arrest while jogging. Then optical sensors got small and cheap enough to sit on a wrist. Polar shipped the first wireless heart rate monitor in 1977, and it was built for elite Finnish cross-country skiers, not for everyday users. For a long time, that kind of data stayed in that world, or at least required gear most people wouldn’t bother with. Then Fitbit brought heart rate into a simple wristband, Apple built it into a watch, and it gradually became part of how people expected these devices to work. At this point, it’s hard to imagine a fitness product without it. 
What used to feel specialized is now just assumed. The entire category was reorganized around a sensor that used to require a hospital visit. What’s easy to forget is that consumers didn’t ask for this. Apple and the companies that followed turned heart rate into a requirement before most people knew why it mattered. Once it was there, it became unthinkable to ship without it. Enter Brain Sensing Brain sensing will follow the same path. The first companies to integrate it won’t be responding to demand so much as shaping it. And once users experience products that adapt to their cognitive state, going back will feel like a downgrade. Active noise cancellation did the same thing to headphones. Bose had the science for years, originally developed for aviation, before Sony and Apple turned it into a consumer expectation that redrew the entire competitive map in premium audio. If you were making $300 headphones without ANC by 2020, you weren’t in the conversation. The companies that waited didn’t lose because the tech was unclear; they lost because they waited for confirmation. We’re seeing this now in the age of AI. Google invested heavily in AI research for years, improving internal processes and products with LLMs since the late 2010s. It wasn’t until ex-Google employees came up with the idea to launch a chatbot (in the form of ChatGPT) that AI became a mainstream term (and prompted Google’s famous “code red” initiative at the end of 2022). The technology didn’t suddenly appear, but the shift in market perception forced everyone else to react. What’s worth noticing is that in every case, the underlying technology was well understood long before anyone productized it. Science wasn’t the bottleneck. The engineering was shrinking the sensor, solving the noise problem, making the experience seamless enough that a normal person never thinks about the technology underneath. That’s exactly where we are right now with brain sensing.
And the product category it’s going to hit first is everything worn on or around the head. What’s taking so long? Which raises a reasonable question: if the brain is the most important organ we have, why hasn’t anyone turned brain data into a consumer standard already? Electroencephalography (EEG) has been measuring the brain’s electrical activity since 1924. Hans Berger, a German psychiatrist, captured the first recording of human brainwaves almost exactly a century ago. Since then, EEG has become one of the most widely used measurement tools in clinical neuroscience. It’s standard in hospitals for diagnosing epilepsy, evaluating traumatic brain injuries, studying sleep disorders, and flagging early markers of neurodegeneration. This is not emerging science. This is established, validated, battle-tested science that has been sitting there waiting for someone to solve the product problem. The limitation was never understanding the brain; it was making the technology disappear into a product people would actually use. The basics: your brain emits tiny electrical signals every time neurons fire in coordinated patterns. Just like EKGs pick up the electrical pulses from your heart, EEG detects the electrical pulses from your brain. The best part? It’s completely noninvasive. The user doesn’t feel a thing. And when you process those signals well, they tell you a surprising amount about how someone’s brain is actually performing in real time. So why has it taken a hundred years for this to land in a consumer product? Because three hard engineering problems were stacked on top of each other, and until recently, no one had solved all three. The sensors were a nonstarter for consumers. Clinical EEG uses wet electrodes, metal discs that need conductive gel, a skilled technician, and a setup process that takes 20 to 45 minutes (or more). The caps can run anywhere from 64 to 256 electrodes wired across the scalp. Outstanding data.
Zero chance anyone’s doing that before their Monday standup. What changed is material science. Soft, dry, conductive fabric sensors can now capture EEG signals from the skin on the head, around or in the ear with enough fidelity to produce research-grade data. They integrate directly into the ear cushions of headphones, so the form factor and comfort stay the same, and the user doesn’t have to think about them at all. Brain signals are absurdly quiet. I mean absurdly. We’re talking microvolts, one millionth of a volt. A single jaw clench can generate electrical noise orders of magnitude louder than the brain signals you’re trying to read. In a controlled lab, you can manage that. In the real world, where your customer is walking through an airport or grinding their teeth during a Zoom call, the signal-to-noise ratio is a nightmare. This is where AI earned its keep, and I mean years of earning it, not a model someone fine-tuned over a weekend. Machine learning systems trained on thousands of hours of real-world brain data from thousands of users can now isolate neural activity from muscle artifacts, electrical interference, and movement noise, in real time, on compact hardware. Some of these models have been validated through work with the Department of Defense and partnerships with clinical institutions. The signal processing is the moat. It’s what separates legitimate consumer EEG from the wave of pseudoscience wearables that have come and gone over the past decade, and there have been plenty. It had to be invisible. The final step The last piece is pure product engineering. EEG systems that once needed dedicated amplifiers and bundles of wires now run on the same Bluetooth chips and battery budgets as premium noise-canceling headphones. Multi-channel EEG, 250 to 500 Hz sampling rate, wireless data transmission all inside an ear cup, with enough juice left to maintain typical battery life. The user puts on headphones. The brain sensing just happens. 
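The spectral analysis underneath this kind of feature is worth seeing in miniature: given raw EEG samples, compute how signal power is distributed across the conventional frequency bands. This is a naive, illustrative version (a slow quadratic DFT on a short synthetic window), not any vendor's pipeline; real systems layer artifact rejection and trained models on top, and the band boundaries are conventional approximations.

```python
import math

# Conventional EEG band boundaries in Hz (approximate)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(samples, fs):
    """Relative power per EEG band using a naive DFT.
    Quadratic and slow, but fine for a short illustrative window."""
    n = len(samples)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):  # skip DC, ignore mirrored half
        freq = k * fs / n
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        p = re * re + im * im
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += p
    total = sum(powers.values()) or 1.0
    return {name: p / total for name, p in powers.items()}

# One second of a synthetic 10 Hz (alpha-range) oscillation at 250 Hz sampling
fs = 250
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
rel = band_powers(signal, fs)
print(max(rel, key=rel.get))  # alpha
```

Shifts in that relative band-power profile over time are the raw material for the focus, fatigue, and load estimates described below; the hard part is doing this on microvolt signals drowning in muscle and movement noise.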
What matters is that these three breakthroughs compounded. Better sensors generated cleaner data. Cleaner data trained better models. Better models meant you could extract more signal from fewer, smaller sensors. That flywheel is what finally moved brain sensing from “technically possible in ideal conditions” to “shipping in consumer hardware.” In other words, this is no longer a research problem. It’s a product decision. If you’re evaluating this for your roadmap, this is where things tend to matter most, because overclaiming is rampant in this space, and it erodes trust fast. Consumer-grade EEG has been validated in DoD-reviewed research and in real-world deployment for detecting changes in cognitive state over time. The brain’s electrical oscillations fall into well-characterized frequency bands (delta, theta, alpha, beta, gamma), and the relative power across those bands shifts in predictable ways with different mental states. That’s the foundation. In practice, a few applications are already reliable today: Focus and attention detection is the most robust application. Distinguishing sustained concentration from mind wandering, backed by substantial published research. Being able to proactively recommend an intervention when focus starts to drop, in some cases, hours before they’d normally take a break. Cognitive fatigue detection identifies declining mental performance before the person subjectively notices it. This has been validated across populations from office workers to military personnel, and it’s one of the most immediately useful applications for product integration. Imagine your earbuds coaching you through the last mile of a long race when they detect your cognitive resources need it most. That’s the kind of differentiator this technology can enable. Cognitive load estimation measures how hard the brain is working on a given task. Relevant for UX research, adaptive interfaces, gaming performance, and workplace optimization.
Crucial across military, driver, and pilot use cases to pull someone out before accidents happen. Longitudinal brain health trends track shifts in baseline brain activity over weeks and months. These patterns correlate with sleep quality, stress levels, and aging. The research on whether they can serve as early indicators of neurological change is promising but still maturing. It’s worth watching closely, but it would be irresponsible to overstate where the science is today. What makes this different from earlier biosensors is how the data gets used. Heart rate data (PPG) is retrospective. It tells you what has already happened to your body. EEG is real-time and bidirectional. The system detects a shift in your cognitive state and responds to it immediately. That’s not a subtle distinction. It’s the difference between a dashboard that tells you what already happened and a system that actively changes with your performance in real time. The closed-loop potential, where the product adjusts audio, pacing, content, workload, or alerts based on live brain state, is the innovation that makes this genuinely new territory. No previous consumer sensor has enabled this. The limits Now, what EEG does not do: it does not read thoughts. It does not decode what someone is thinking about. It measures how the brain is performing, not what it’s processing. The applications right now are wellness and performance, not clinical diagnosis. That line matters, scientifically and regulatorily, and any partner worth working with will be clear about it. If someone tells you their EEG can do more than this today, ask to see the published validation data. The credible players in this space welcome hard questions. The others deflect them. If you’re running a product org for headphones, gaming headsets, earbuds, AR glasses, helmets, hearing aids, or anything head-worn, the integration math looks like this: The physical footprint is smaller than most people expect.
Unobtrusive, comfortable sensors are embedded in existing ear tips or cushion form factors, with a firmware layer handling signal acquisition and transmission and a software platform doing the processing. If your product already makes contact with the skin in or around the ear or on the head, you’re working with a compatible starting point. The industrial design disruption can be minimal, the sensors are invisible to the end user, and you’re not asking your customers to do anything differently.

You also don’t have to build a neuroscience team. The technology stack (sensors, firmware, signal processing, AI models, and app infrastructure) is licensable. Think about the model Qualcomm established for mobile connectivity or what Dolby did for audio processing: deep technology, integrated into your product, without requiring a decade of R&D you haven’t done. The hard years of data collection, algorithm training, and clinical validation already happened. You’re buying the outcome, not the journey.

And what most hardware companies miss: this isn’t a feature add. It’s a new computing layer, one with a roadmap that compounds over time, and with revenue models that pure hardware doesn’t support: subscriptions, premium tiers, enterprise licensing, data partnerships. The companies integrating now aren’t just acquiring a sensor. They’re taking a position in a platform that’s still being built, at a moment when that position is still available.

And the feature set is meaningful and available today: focus tracking, fatigue detection, cognitive health insights, personalized performance coaching, and brain break prompts. Devices with these features are already shipping, and early data shows two out of three users reporting measurable improvements in daily focus. That’s the kind of engagement metric that supports premium pricing and retention. This is also just the baseline.
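To make the band-power foundation and the closed-loop pattern described above concrete, here is a minimal sketch in Python. Everything here is an illustrative assumption rather than any vendor’s actual pipeline: the sample rate, the band edges, the beta/(alpha+theta) engagement index, and the intervention threshold are all hand-picked for the example.

```python
# Minimal sketch: relative EEG band power per window, feeding a simple
# closed-loop decision. All constants are illustrative assumptions.
import numpy as np

FS = 256  # assumed sample rate (Hz), typical for consumer EEG

# Canonical EEG frequency bands (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window, fs=FS):
    """Spectral power in each canonical EEG band for one signal window."""
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def engagement_index(window):
    """beta / (alpha + theta): a classic rough proxy for task engagement."""
    p = band_powers(window)
    return p["beta"] / (p["alpha"] + p["theta"] + 1e-12)

def closed_loop(windows, threshold=0.5):
    """Flag each window: intervene when the engagement proxy drops."""
    return ["intervene" if engagement_index(w) < threshold else "ok"
            for w in windows]

# Two synthetic 2-second windows: one beta-dominated (focused),
# one alpha-dominated (relaxed / drifting).
t = np.arange(2 * FS) / FS
focused = np.sin(2 * np.pi * 20 * t)   # 20 Hz -> beta band
drifting = np.sin(2 * np.pi * 10 * t)  # 10 Hz -> alpha band
print(closed_loop([focused, drifting]))  # ['ok', 'intervene']
```

The beta/(alpha+theta) ratio is a long-standing rough engagement proxy from the research literature; a production system would rely on the licensed platform’s validated models rather than a single hand-picked ratio, and the “intervention” would be an audio, pacing, or content change rather than a printed flag.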
New biomarkers and applications are in active development, sleep biofeedback is already in the pipeline, and the platform roadmap keeps expanding as more real-world data gets collected. How much that matters will depend on whether you’re in a position to take advantage of it.

The gaming wearables market is projected to grow from $5 billion to nearly $20 billion by 2034. The BCI market overall is expected to exceed $52 billion globally in the same timeframe. Brain sensing headsets are already winning “Best of CES” awards. This isn’t a niche technology looking for a market. The market is forming in real time.

A compounding advantage

One part that doesn’t get discussed enough is that brain data has a compounding advantage. The companies that start collecting it first build better models. Better models attract more users. More users generate more data. That flywheel is extremely difficult to replicate once a competitor has a multi-year head start on it. If you’ve watched what happened with fitness data ecosystems, and how hard it is to switch away from a platform that holds years of your health history, you understand why the early mover advantage here isn’t just about features. It’s about the data layer underneath.

At this point, it’s less about whether this works and more about whether you’re early enough to matter. If I were sitting in a product review evaluating whether to pursue brain sensing integration, the questions I’d focus on are:

On integration: What’s the BOM impact? What changes in my existing ID? What does sensor contact look like across different head shapes and hair types? What happens when contact is bad? Does the system fail silently, throw errors, or degrade gracefully?

On the platform: What does the user see, and through what interface? How much processing happens on device versus in the cloud? How is sensitive brain data protected? What’s the privacy architecture?
(This one is non-negotiable, and regulators are already circling: Colorado passed the first state privacy act that explicitly includes neural data as protected information.)

On the business: What’s the evidence on willingness to pay for cognitive features? Which verticals are moving fastest? What does the regulatory landscape look like if I want to make wellness claims versus health claims?

A good partner has clear answers to all of these. If they’re hand-waving on any of them, you’re in the wrong conversation.

Heart rate monitoring existed for a century before it became a consumer standard. Active noise cancellation sat in aviation for decades before it redefined headphones. AI supported internal products and infrastructure at Google for nearly a decade before chatbots were widely adopted. In all cases, science was never the holdup. The product packaging was. And in all cases, the companies that moved early didn’t just have a feature advantage; they defined what the category became.

Brain sensing is on this same path. The science is validated. The engineering is solved. The form factors are ready. The first products are shipping and winning awards. At this point, it mostly comes down to timing and whether you’re early or playing catch-up. You’ve watched this exact pattern play out before. You know how it ends for the companies that wait.

View the full article