All Activity


  1. Past hour
2. From Experiences to Transformations: The Future of Value Creation, with Rory Henry. The Holistic Guide to Wealth Management. Go PRO for members-only access to more Rory Henry. View the full article
4. “We’re growing in a way that is strategic, and that we’re preparing our people to meet the demands of that growth.” MOVE Like This, with Bonnie Buol Ruszczyk, for CPA Trendlines Research. Go PRO for members-only access to more Bonnie Buol Ruszczyk. View the full article
6. You can now do in 20 minutes what used to take a full afternoon. Feed two Semrush exports into Claude or ChatGPT, and you'll get a polished competitor analysis, complete with topic clusters, gap tables, and prioritized briefs. The output looks convincing. The tables are clean. The recommendations sound confident.

That's the problem. AI can organize and summarize data quickly, but it can't make strategic decisions. Without the right workflow, prompts, and validation, you risk acting on insights that sound right but lack depth. Used correctly, though, AI can surface meaningful patterns, revealing differences in topical depth, content coverage, and authority signals that influence search visibility.

Here's a walkthrough of a real two-competitor analysis using Claude and Semrush data, showing how to turn fast AI outputs into a reliable strategy. You'll get a repeatable workflow, tested prompts, and a validation checklist to catch common mistakes, along with a clear sense of where to trust AI and where to rely on your judgment.

AI won't run a competitor analysis for you. But it can compress the manual work (clustering, pattern matching, and synthesis) so you can focus on interpreting intent, validating opportunities, and deciding what's worth pursuing.

Note: The sites in this analysis are real but anonymized. Site Y is our client, while Competitors A and B are direct competitors in the same niche. The data is from real Semrush exports pulled in early 2026.

Start with data, not a prompt

Whenever possible, start by exporting data from your SEO tool. Don't ask an AI assistant to guess what an SEO tool can tell you. Otherwise, you're treating your AI assistant as a measurement tool. It isn't one, but it will try its best to respond to your request, which often produces plausible-sounding traffic estimates, keyword lists, and competitive assessments that are partially or entirely fabricated.

Here's what we exported and why each piece matters.
Export 1: Organic Research > Pages (top 100 by estimated traffic)

This report tells you which pages are winning. Key columns include the URL, estimated traffic per page, number of ranking keywords per page, the intent breakdown (commercial, informational, navigational, transactional), and the traffic change column that shows momentum. For example, a page pulling 14,500 visits from 1,632 keywords is a different asset from a page pulling 400 visits from 12 keywords. The intent split tells you why that traffic matters.

Export 2: Organic Research > Positions (top 100 keywords by traffic)

This export tells you which keywords are winning. Key columns here are keyword and position, search volume, keyword difficulty, search engine results page (SERP) features (image packs, video carousels, and People Also Ask), and keyword intent tags. Instead of telling you which URLs perform best, this report reveals which search queries drive the most traffic. You need both reports for a complete picture.

The export checklist

For each competitor and for your own site, pull:

- Semrush Organic Research > Pages, top 50-100, sorted by traffic.
- Semrush Organic Research > Positions, top 100-500, sorted by traffic.
- Semrush Keyword Gap report (optional).
- Screaming Frog crawl with URLs, titles, H1s, word count, crawl depth, and internal links (optional). This adds structural context, like how deep pages are buried in the site architecture, that the Semrush exports don't include.

Conduct a 20-minute competitive review

Next, feed your exports into your AI assistant. Ask it to do three things: classify, cluster, and compare.

Topic taxonomy (per site)

Here's the prompt I used:

I'm going to give you a Semrush Organic Pages export for a website.
Each row is a URL with its estimated organic traffic, number of ranking keywords, and intent breakdown. Please:

1. Assign each URL to a topic category (e.g., "Product - Roof Racks," "Editorial - Buying Guides," "Support - Technical," "Category - Inventory")
2. Assign a page type: Homepage, Product Page, Category Page, Editorial/Guide, Blog Post, Support/Info, Landing Page, or Other
3. Create a summary table showing: topic category, number of pages, total traffic, and dominant intent

Rules:
- Base classifications on the URL path and any context available. Do NOT guess traffic numbers or keyword data. Use only what's in the export.
- If a URL is ambiguous, flag it as "needs manual review" rather than guessing.
- Group similar topics (e.g., don't create separate categories for "off-road accessories" and "off-road bumper kits." Cluster them).
- After classifying, list any URLs where you're less than 80% confident in the classification. I'll verify those manually.

Here's the data: [PASTE PAGES EXPORT]

For Site Y, Claude identified seven topic clusters across 100 pages. Here's the summary:

Topic cluster                     | Pages | Traffic | Dominant intent
Homepage/Brand                    | 3     | 14,651  | Mixed (commercial and informational)
Buying guides and comparisons     | 25    | ~10,600 | Informational and commercial
Roof racks and cargo (product)    | 2     | ~5,100  | Commercial and transactional
Bumpers and armor (product)       | 38    | ~2,300  | Commercial
Installation and how-to content   | 4     | ~1,300  | Informational
Inventory/Category                | 4     | ~540    | Transactional
Other (brand, manufacturer, thin) | 24    | ~1,300  | Mixed

Even before comparing competitors, this taxonomy tells a story. Our client's organic traffic is driven more by editorial content (buying guides and comparisons) than by all product pages combined. In fact, a single buying guide pulled 7,336 visits on its own, while the top product page drove 5,021. That editorial strength is both a strategic asset and a vulnerability, since editorial rankings can be more volatile than product page rankings.
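The arithmetic in a summary table like the one above is worth double-checking outside the chat window. This sketch rebuilds a per-cluster summary from a classified Pages export with pandas; the rows and column names here are hypothetical stand-ins, not the real export schema:

```python
import pandas as pd

# Hypothetical rows standing in for a Semrush Pages export after the
# AI assistant's topic labels have been merged back in; real column
# names may differ from these assumptions.
pages = pd.DataFrame({
    "URL": ["/guides/best-racks", "/guides/rack-comparison", "/product/rack-x", "/"],
    "Traffic": [7336, 3264, 5021, 14000],
    "Topic": ["Buying guides", "Buying guides", "Roof racks (product)", "Homepage/Brand"],
})

# Rebuild the per-cluster summary: page count and total traffic,
# sorted so the biggest clusters surface first.
summary = (
    pages.groupby("Topic")
    .agg(pages=("URL", "count"), traffic=("Traffic", "sum"))
    .sort_values("traffic", ascending=False)
)
print(summary)
```

Comparing a table built this way against the assistant's version catches dropped rows or miscounted traffic before they propagate into the competitor comparison.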
Competitor comparison

Once you've created a taxonomy for each site, use this prompt to compare them:

I now have topic taxonomies for three competing sites in the same niche. I'm going to give you the summary tables for all three. Please:

1. Build a comparison table showing how each site's traffic distributes across topic categories
2. Identify each site's "content strategy signature": what type of content drives the majority of their organic traffic
3. Flag any categories where one site dominates and the others are weak or absent
4. Note the traffic concentration: what percentage of each site's total traffic comes from their top 3 pages

Rules:
- Use only the data provided. Do not estimate or infer traffic for categories not present in a site's export.
- If a category doesn't exist for a site, mark it "Not present" rather than zero. We don't know if they have content there, only that it doesn't appear in their top 100.

Site Y taxonomy: [PASTE]
Competitor A taxonomy: [PASTE]
Competitor B taxonomy: [PASTE]

When we used this prompt, Claude revealed three completely different strategies in the same niche:

                              | Site Y                                             | Competitor A                             | Competitor B
Content strategy              | Editorial-led                                      | Utility/support-led                      | Product page-led
Top content type              | Buying guides and comparisons                      | Info/support pages (60 of top 100)       | Product pages and category pages
Non-homepage hero page        | Tow capacity and fitment calculator (7,336 visits) | Bolt pattern lookup guide (1,245 visits) | Off-road bumper category (3,200 visits)
Traffic concentration (top 3) | 75.3%                                              | 81.2%                                    | 71.8%
Estimated traffic (top 100)   | 35,681                                             | 7,017                                    | 11,093
Momentum                      | Growing (+1,743 net)                               | Flat (-264 net)                          | Declining (-1,525 net)

Manually developing this comparison could require hours of spreadsheet work: categorizing 300 URLs, building pivot tables, and trying to spot patterns across three tabs. Claude did it in minutes.
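Metrics like top-three traffic concentration are cheap to verify yourself rather than trusting the model's arithmetic. A minimal sketch, using made-up traffic numbers:

```python
def top_n_concentration(page_traffic, n=3):
    """Percentage of a site's exported traffic held by its top-n pages."""
    ranked = sorted(page_traffic, reverse=True)
    return round(100 * sum(ranked[:n]) / sum(ranked), 1)

# Toy profile: a few hero pages over a long tail (not the real export).
site_y = [14000, 7336, 5021] + [100] * 20
print(top_n_concentration(site_y))
```

A concentration above roughly 75% is a fragility signal as much as a strength: losing one hero page moves the whole traffic line.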
The pattern recognition alone (three completely different strategies from three sites selling in the same market) is genuinely valuable output. The numbers show that Site Y pulls five times the organic traffic of Competitor A and three times that of Competitor B, despite all three competing in the same space. Competitor A's second-highest traffic page is a bolt pattern guide on a support subdomain. Competitor B is losing ground fast, with its top category page dropping by 1,184 visits. If you're running a competitive analysis and you don't spot patterns like these, you're missing the strategic story behind the data.

Apply human judgment

If you were to stop after generating the clusters and comparison chart, you'd have a plausible-looking competitive analysis. But the AI-generated output needs human intervention before you make any strategic decisions.

Check the classifications

Spot-check 10-15% of classifications by visiting the URLs. Correct the taxonomy, and then re-run the comparison. This turns an 85% accurate first draft into one with 95% or higher accuracy. The "confidence flag" line in the prompt ("list any URLs where you're less than 80% confident") saves you from having to guess which ones to check. If you skip this step, misclassifications can distort your entire competitive profile.

For example, when I checked Claude's page classifications against the actual live pages, roughly 15% needed correction. It tagged a product comparison page as a blog post. It classified a regional landing page as a category page. And it lumped an FAQ page into the "Other" category even though it served as the site's primary buyer's guide for a specific product line. These are the kinds of mistakes that come from categorizing URLs by path structure alone, without seeing the page content.
For example, if a URL path says /blog/best-off-road-accessories/, AI assistants will call it a blog post even if the page functions as a commercial comparison guide.

Consider the intent

AI assistants can surface data points in seconds, but they can't make strategic calls for you. Interpreting the data requires understanding your client's business model, their authority level, and their content capacity. I've seen teams burn an entire content sprint on high-volume informational keywords that drove plenty of traffic and zero leads. If the intent doesn't match your business goals, the volume is irrelevant.

For example, Competitor A's second-highest-traffic page is a bolt pattern lookup guide, pulling 1,245 visits per month. Claude flagged this as a content strategy gap for Site Y, since our client had no equivalent utility content. This is technically correct but strategically misleading. The bolt pattern guide targets purely informational intent: the page builds authority and earns links, but it's not a commercial driver. Utility content like this is worth creating as a steady background effort, not a priority sprint. The commercially relevant gaps (product categories, buying guides) come first.

Use this prompt fix:

For each opportunity you flag, check the intent breakdown from the Semrush data. If more than 60% of the traffic is informational or navigational intent, flag it separately as "authority builder, not direct conversion driver" so I can prioritize accordingly.

Compare SERP reality vs. ranking position

AI assistants work from the position numbers and volume data in your SEO reports. They don't know what the SERP looks like. For example, Claude saw that Site Y ranks at position 3 for "off-road roof rack" (22,200 monthly searches, driving 1,443 visits) and treated it as a straightforward optimization opportunity: push the page to position 1 and capture more traffic. Simple.
But in reality, the SERP is packed with rich features: popular products, an image pack, and People Also Ask. The traditional organic blue links appear barely above the fold on desktop and well below the fold on mobile. Ranking in position 1 likely wouldn't deliver the traffic increase you'd normally expect from a 22,200-volume keyword, because the SERP features absorb most of the clicks.

For your top five or 10 priority keywords, do a manual SERP check. If the page is dominated by shopping carousels and video results, a traditional organic push may not be the right play. Instead, a product feed optimization or video content strategy might be more effective.

Do a gap analysis

Your SEO tool already has a keyword gap report. But a raw list of missing keywords isn't a strategy. Use it as a starting point. Then, let AI cluster those gaps into themes, tiering them by intent and business relevance and turning raw gap data into prioritized actions.

Start with the tool data

We pulled two Semrush Keyword Gap reports comparing Site Y against both competitors. They revealed:

- Missing keywords: 217 keywords where both competitors rank and Site Y doesn't appear at all. Combined search volume: ~49,700/month.
- Weak keywords: 106 keywords where Site Y ranks but gets outperformed by both competitors. Combined search volume: ~33,650/month.

Feed the gap data to AI

Use this prompt with your AI assistant:

I'm going to give you two Semrush Keyword Gap reports:
1. MISSING: keywords where both competitors rank and Site Y doesn't
2. WEAK: keywords where Site Y ranks but competitors outrank us

Each row includes: keyword, intent tags, search volume, keyword difficulty, CPC, and the ranking position for each site. Please:

1. Cluster the keywords into thematic groups (e.g., "bumpers," "roof racks," "overlanding gear," "light bar kits," "torque specs/fitment"). A keyword can only belong to one cluster.
2. For each cluster, provide: number of keywords, total search volume, dominant intent, and average keyword difficulty.
3. Separate the clusters into tiers based on intent:
- Tier 1 (Commercially relevant): Clusters with predominantly commercial or transactional intent that align with the site's core product/service offering
- Tier 2 (Adjacent commercial): Clusters that are commercially relevant to the broader market but may not be the site's primary product focus
- Tier 3 (Authority builders): Clusters with primarily informational or navigational intent that build topical authority but are unlikely to drive direct conversions
Note: I will review the tier assignments and adjust based on business model fit. Make your best guess and flag any clusters where the tier assignment is uncertain.
4. Within each tier, sort by combined search volume
5. Flag any keywords that are branded competitor terms (e.g., a competitor's product or brand name). These are generally not pursuable gaps
6. For the WEAK keywords, separate into "close wins" (Site Y in positions 1-10) vs. "long shots" (Site Y in positions 50+)

Rules:
- Use ONLY the keywords in these exports. Do not suggest keywords not present in the data.
- If intent data is missing or ambiguous, mark it "verify manually" rather than guessing.
- Do not invent search volume or ranking data. If a field is empty, say "not available."
MISSING keywords: [PASTE]
WEAK keywords: [PASTE]

When we used this prompt with Claude, clear thematic clusters emerged from the 217 missing keywords:

Cluster                                    | Keywords | Combined volume | Dominant intent | Claude's tier
Bumpers / skid plates                      | 30+      | ~12,000/mo      | Commercial      | 1
Roof racks / cargo systems                 | 10+      | ~8,000/mo       | Commercial      | 1
Winches (for sale)                         | 15+      | ~5,500/mo       | Transactional   | 1
LED light bar kits                         | 12+      | ~3,200/mo       | Commercial      | 1
Overlanding gear / overlanding accessories | 10+      | ~2,800/mo       | Commercial      | 1
Torque specs / installation guides         | 8+       | ~1,500/mo       | Informational   | 3
Branded competitor terms                   | 6+       | ~1,200/mo       | Navigational    | Skip

Correct AI's priorities

This step determines where you spend the next quarter's content budget, so human judgment is essential. If you let an AI assistant set your content priorities based purely on search volume and intent labels, you'll end up chasing someone else's market instead of dominating your own. Volume is seductive, but business alignment is what drives revenue.

For example, Claude clustered 323 keywords and tiered them by intent in minutes. But it assigned bumpers/skid plates (~12,000/month volume) the same priority as overlanding gear (~2,800/month) because it doesn't know what Site Y sells. Without a human override, we might have built our content calendar around the wrong cluster.

Cluster                                    | Claude's tier | Corrected tier   | Reasoning
Overlanding gear / overlanding accessories | 1             | 1: Core business | Directly aligned with Site Y's primary product line. These are the keywords that drive qualified buyers.
Bumpers / skid plates                      | 1             | 2: Adjacent      | High volume, commercially relevant to the broader market, and Site Y stocks some of these products. Worth targeting through editorial/guide content over time, but not the priority sprint.
Roof racks / cargo systems                 | 1             | 2: Adjacent      | Related to what Site Y does, but not the core offering.
Winches (for sale)                         | 1             | 2: Adjacent      | Transactional intent is appealing, but these are a different product category.
LED light bar kits                         | 1             | 2: Adjacent      | Related market, but not core inventory.
Torque specs / installation guides         | 3             | 3: Authority     | Informational content that builds topical relevance. Steady background effort.
Branded competitor terms                   | Skip          | Skip             | Can't realistically win these anytime soon.

Identify small pushes that make big differences

Next, find the low-effort opportunities with the biggest payoffs. For example, from 106 weak keywords, we separated 17 close wins where Site Y already ranks in positions 1-10. These have real potential:

Keyword                      | Volume | Site Y position | Best competitor position | Gap
overlanding accessories      | 1,600  | 3               | 1                        | 2 positions
overlanding gear             | 720    | 3               | 1                        | 2 positions
overlanding roof rack        | 720    | 4               | 1                        | 3 positions
overlanding accessory kit    | 590    | 3               | 1                        | 2 positions
overlanding storage system   | 390    | 3               | 1                        | 2 positions
overland vehicle accessories | 320    | 3               | 1                        | 2 positions
overland accessories         | 260    | 3               | 1                        | 2 positions
overlanding cargo rack       | 210    | 3               | 1                        | 2 positions

Site Y sits at position 3 across virtually every "overlanding" variant, while Competitor A holds position 1. These are optimization opportunities. A focused push (better on-page targeting, internal linking adjustments, and content updates incorporating "overlanding" language more explicitly) could flip several of these to position 1 or 2. That's a different action than writing a new page, and Claude would have defaulted to writing new pages if we hadn't split the data into close wins and long shots.

Factor in authority context

As a final validation step, pull the backlink profiles for your competitors. When we did this, we found that both had relatively thin link profiles.
Competitor B had 199 backlinks with an average page authority score of just 1.1 (on Semrush's 0-100 scale), while Competitor A had 128 backlinks, averaging a 3.1 authority score. The highest quality links for both came from the same handful of overlanding and off-road vehicle publications.

The most-linked pages and the top organic pages barely overlapped for either competitor. Only the homepages appeared in both lists. Competitor B's top backlinks pointed to product pages, while its top organic traffic came from category pages. Competitor A's best links came from editorial features, while its organic traffic was dominated by the homepage and a support page.

This tells us their organic rankings are driven more by topical relevance and on-page SEO than by direct link equity to individual pages. It means the keyword gaps we identified are likely winnable through content and optimization rather than requiring a major link building campaign.

Turn the gap analysis into a brief

Use your competitor analysis to draft a content brief with AI. Input this prompt:

Based on the gap analysis we ran, [DESCRIBE PRIORITY CLUSTER] emerged as a priority. Draft a content brief for optimizing the existing presence and/or creating a new page to capture this cluster. Include:

1. Primary and secondary target keywords (from our data only)
2. Recommended page type and format (based on what's currently ranking for these terms)
3. Content structure with suggested H2s
4. Content elements the ranking competitors include that our page should match or exceed
5. Estimated word count range based on competing content

Then, in a separate section called "Differentiation: For Human Review," suggest 3 possible angles that would make this page genuinely different from what already ranks. These are suggestions for me to evaluate, not final decisions.

Before finalizing the brief, cross-reference the target keywords against Site Y's existing pages export.
Flag any existing pages that already rank for or target similar keywords. These are potential cannibalization risks that need to be resolved before creating new content.

Rules:
- Do not fabricate competitor content details. Base element recommendations on what we know from our data (URLs, page types, keyword footprints)
- If you need information you don't have (e.g., actual competitor page content), say "manual review needed: [specific thing to check]" rather than guessing

From this prompt, Claude drafted a clean brief with target keywords from our data, a recommended format (long-form guide with product integration), and an H2 structure. It also performed a cannibalization check. Because we added a cross-reference line to the prompt, Claude flagged that Site Y already had a related page pulling 838 visits. If we'd created a new page without checking, it would have competed with the existing page. That one line in the prompt saved us from unnecessary internal competition.

But the differentiation section needed human input. Only someone who knows Site Y's brand voice and customer objections could pick the right angle from these suggested options:

- First-hand testing and review angle: Site Y installs and tests these products, so they can show real usage via trail tests, installation photos, and customer experiences.
- Comparison angle: What's the difference between overlanding and off-road? This directly addresses the keyword overlap we noticed in the gap data.
- Buyer qualification angle: Who needs overlanding gear, and who would be fine with standard off-road accessories?

The experience signals (actual trail tests, customer stories, installation details) also need substantial human oversight. This is where Google's emphasis on experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) meets practical execution. If you don't have genuine first-hand experience to draw on, no amount of keyword optimization will close that gap.
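The cannibalization cross-reference doesn't have to live only in the prompt; it's a deterministic check you can run directly against the Positions export. A sketch with hypothetical keywords and column names (not Semrush's exact schema):

```python
import pandas as pd

# Keywords the new brief would target (hypothetical examples).
brief_keywords = {"overlanding accessories", "overlanding gear", "overlanding storage system"}

# Stand-in for the site's existing Positions export.
positions = pd.DataFrame({
    "Keyword": ["overlanding accessories", "roof rack fitment", "overlanding gear"],
    "URL": ["/guides/overlanding", "/support/fitment", "/guides/overlanding"],
    "Position": [3, 8, 3],
})

# Any overlap is a potential cannibalization risk to resolve
# before commissioning a new page.
risks = positions[positions["Keyword"].isin(brief_keywords)]
print(risks)
```

If `risks` is non-empty, the decision becomes optimize-the-existing-page versus create-new, which is exactly the call the article argues a human should make.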
Run through a validation checklist

Before you act on any AI-assisted competitor analysis, go through this checklist to prevent the most common errors.

Data validation
- Base all analysis on tool exports (Semrush, Ahrefs, Screaming Frog), not AI-generated estimates.
- Check the export dates (if data is older than 90 days, recent algorithm updates or market shifts may have changed the picture).
- Use a meaningful sample size (top 50+ pages per competitor, not just top 10).
- Include both Pages and Positions exports.

Classification validation
- Spot-check 10-15% of the AI assistant's page type and topic classifications against live pages.
- Correct any misclassifications and re-run the comparison.
- Check whether AI created overly granular or overly broad categories.
- Verify that pages on subdomains or unusual URL structures were classified correctly.

Intent validation
- Check intent tags (not just search volume) on all flagged opportunities.
- Separate commercially relevant gaps from informational and authority-building gaps.
- Verify intent interpretation with a manual SERP check on your top three to five priority keywords.
- Make a conscious decision to pursue, defer, or skip high-volume informational keywords.

Prioritization validation
- Confirm your AI assistant's priority ranking aligns with your business goals, not just search volume.
- If a cluster looks like tier 1 based on volume alone, check whether the product or service matches what you sell.
- Determine whether opportunities are achievable given site authority and content resources.
- Confirm no opportunities are branded competitor terms you can't realistically win.
- Check whether a gap is better addressed by optimizing existing content or by creating new content.

Brief validation
- Choose a differentiation angle for AI-generated briefs (not just keywords and structure).
- Verify the recommended content format matches what ranks in SERPs.
- Confirm the brief doesn't target keywords that your own site already ranks for.
- Identify E-E-A-T signals and determine what original content the page needs that AI can't generate.

The shift to AI-assisted SEO competitor analysis

AI tools have changed where you spend your time when conducting a competitor analysis. The data gathering, clustering, cross-referencing, and initial synthesis that used to consume most of your time? AI handles that efficiently. AI assistants free up thinking time, so you can spend it on the parts that determine whether your analysis leads to results: interpreting intent, validating classifications, and making strategic calls about what's worth pursuing and what's a distraction. View the full article
7. The company will be using a simplified name and a new logo that it says reflects its unified business model, but its longstanding tagline will stay in place. View the full article
  8. Today
9. The contract rate on a 30-year mortgage dropped for a third week to 6.35%, the lowest since mid-March. View the full article
10. When it comes to understanding customer experiences, implementing effective feedback methods is essential for businesses. Customer feedback surveys gather valuable quantitative and qualitative data, whereas in-app feedback prompts capture immediate reactions. Real-time chat integration allows for genuine insights during interactions. Customer interviews and focus groups provide deeper qualitative insights, and social listening helps monitor online conversations. Each method contributes to a more customer-centric approach, but how can you effectively integrate these strategies for maximum impact?

Key Takeaways

- Use customer feedback surveys with a mix of question types for concise, unbiased insights on user experiences.
- Implement in-app feedback prompts for immediate responses, increasing participation rates up to five times.
- Integrate real-time chat to gather genuine feedback during customer interactions, enhancing service strategies.
- Conduct customer interviews and focus groups to explore deeper qualitative insights into customer needs and preferences.
- Monitor social listening channels and online reviews to capture real-time insights and address customer pain points effectively.

Customer Feedback Surveys

Customer feedback surveys are essential tools that gather both quantitative and qualitative data about user experiences, preferences, and satisfaction levels with your products or services. To collect valuable customer experience feedback, keep surveys concise and unbiased, and include a mix of closed-ended and open-ended questions. This approach encourages thorough feedback while minimizing respondent fatigue. One popular method is the Net Promoter Score (NPS), which categorizes customers into promoters, passives, and detractors, helping you gauge loyalty and advocacy.
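NPS itself is a simple calculation on 0-10 "how likely are you to recommend us" scores: the percentage of promoters (9-10) minus the percentage of detractors (0-6), with passives (7-8) counted only in the total. A quick sketch with made-up responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Eight hypothetical survey responses: 4 promoters, 2 passives, 2 detractors.
print(nps([10, 9, 8, 7, 6, 3, 10, 9]))  # 50% - 25% -> 25
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why tracking its trend over time matters more than any single reading.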
Timing plays an important role in survey distribution; embedding feedback widgets on your website or app allows for real-time feedback collection during user interactions, leading to higher response rates.

In-App Feedback Prompts

How can in-app feedback prompts transform the way you gather insights from users? These prompts, consisting of micro-surveys with 2-3 questions, let you capture user feedback immediately after interactions. By using in-app feedback collection tools, you can boost response rates considerably, yielding up to 5x higher participation compared to traditional methods. Smart triggers based on user behavior ensure that feedback requests appear at ideal moments, improving their relevance without disrupting the user experience. Here's a quick overview of the benefits:

Benefit              | Description                                | Impact on strategy
Timely insights      | Captures feedback right after interactions | Improves your customer feedback strategy
Higher engagement    | Increases response rates dramatically      | Strengthens your feedback solutions
Seamless integration | Fits naturally within the app              | Streamlines how you gather customer feedback

Incorporating in-app feedback prompts is essential for understanding the importance of customer feedback and acting on it effectively.

Real-Time Chat Integration

Integrating real-time chat into your customer service strategy can greatly improve the way you gather feedback. With real-time chat integration, you can collect immediate insights from customers during interactions, capturing genuine feedback as issues arise. This method lets you implement proactive triggers that prompt customers for feedback based on their specific behaviors. As a result, you can achieve response rates up to five times higher than with traditional feedback methods.

Customer Interviews and Focus Groups

When seeking to understand customer needs and preferences, interviews and focus groups provide valuable qualitative insights that quantitative surveys often miss.
These methods let you dig deeper into customer feedback, uncovering the reasons behind customer sentiment and shaping your product roadmap accordingly. They help you:

- Gain a nuanced understanding of customer perceptions.
- Encourage collaboration and shared insights through group discussions.
- Create a more responsive, customer-centric culture.

Social Listening and Online Reviews

Social listening and online reviews are vital components of modern customer feedback strategies, providing valuable insight into customer opinions and behaviors. By monitoring social media conversations, you can gain real-time feedback on customer sentiment and emerging trends. Engaging with online reviews is equally important, as 93% of consumers say these reviews influence their purchasing decisions. Responding to both positive and negative feedback builds brand trust, as 70% of consumers expect brands to acknowledge their reviews.

Method             | Benefit
Social listening   | Real-time customer insights
Online reviews     | Influence on purchase decisions
Customer feedback  | Identifies pain points
Customer retention | Improves the overall experience

Using social listening tools and analyzing online reviews reveals common themes, allowing you to address specific concerns, improve the customer experience, and ultimately enhance customer retention.

Frequently Asked Questions

What Is the 10 to 10 Rule in Customer Service?

The 10 to 10 rule in customer service emphasizes responding to customer inquiries within ten minutes and ensuring a resolution or follow-up within ten hours. This approach encourages quick engagement, which improves customer satisfaction and retention rates. By prioritizing timely communication, you streamline support processes and build trust with your customers. Implementing this rule not only enhances the overall customer experience but also positions your business favorably against competitors who may not prioritize swift responses.

Which Tool Is Most Effective in Gathering Customer Insights?
To gather customer insights effectively, consider using real-time feedback tools like Zendesk or Drift. These platforms integrate with your support operations, capturing customer input immediately after interactions. Furthermore, in-app surveys offered by tools such as Intercom can greatly increase response rates, providing timely data. For organized feedback management, platforms like UserVoice streamline feature requests, allowing you to prioritize based on user impact, ensuring you act on insights efficiently and effectively. What Is the Most Immediate Way to Gather Customer Feedback? The most immediate way to gather customer feedback is through real-time methods like in-app surveys or live chat integrations. By triggering these feedback requests during user interactions, you capture authentic reactions while their experience is fresh. Contextual micro-surveys that focus on specific user actions can greatly increase response rates. Furthermore, automated follow-ups after live chat sessions allow customers to provide instant feedback, enhancing your ability to address issues swiftly and effectively. What Are the 3 C’s of Customer Satisfaction? The 3 C’s of customer satisfaction are Consistency, Communication, and Customer Experience. Consistency ensures you deliver the same high-quality service across all touchpoints, which builds trust. Communication involves actively listening to your customers, responding quickly, and addressing their feedback, enhancing their perception of your brand. Finally, Customer Experience encompasses every interaction a customer has with your business, where positive experiences can considerably boost loyalty and retention rates. Focusing on these three elements is essential for success. Conclusion Incorporating these five effective customer feedback methods can greatly improve your understanding of customer experiences. 
By utilizing customer feedback surveys, in-app prompts, real-time chat, interviews, and social listening, you can gather valuable insights that drive improvements. These approaches not only boost response rates but also cultivate a more customer-centric culture within your organization. Ultimately, leveraging these strategies will help you improve customer satisfaction and retention, ensuring your business remains competitive and responsive to changing needs. Image via Google Gemini This article, "5 Effective Customer Feedback Methods for Instant Insights" was first published on Small Business Trends View the full article
  11. Previous bid for FTSE 100 group from Swedish private equity firm rejectedView the full article
  12. AI search is caught in a self-reinforcing loop, where synthetic content feeds retrieval systems that present it back as fact. The post AI Search Is Eating Itself & The SEO Industry Is The Source appeared first on Search Engine Journal. View the full article
  13. We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication. The Samsung Galaxy S26 is down to $799.99 for the 256GB unlocked version, a drop from $899.99 and its lowest price so far, according to price trackers. This is Samsung’s smallest flagship for 2026, and it leans into that idea of giving you most of the high-end experience without the size or cost of the Ultra model. The design feels familiar if you have used a Galaxy before, and is relatively compact at 6.3 inches, so it sits comfortably in one hand without feeling cramped. It also comes with an IP68 rating for dust and water resistance. Samsung Galaxy S26 Unlocked Android smartphone (256GB, black) $799.99 at Amazon $899.99 Save $100.00 It runs Android 16 with Samsung’s One UI 8.1, and it is set to receive seven years of updates, which is still one of the longest support windows you will find on an Android phone. Performance is not a concern here—the Snapdragon 8 Elite Gen 5 processor for Galaxy keeps everything fast, whether you are jumping between apps, editing photos, or playing games. Plus, it has a bright and sharp display (with a 120Hz refresh rate) that holds up well outdoors. Samsung’s newer AI tools are built-in, too—you can edit photos using text prompts, clean up document scans, or get suggestions through features like Now Brief. That said, its battery life is average, with just over 15 hours of video streaming, according to this PCMag review. The triple-camera system, with a 50MP main sensor, 12MP ultrawide, and 10MP telephoto, delivers solid results in most conditions. Photos look natural, and low-light shots benefit from a brighter main sensor, though you may notice some softness compared to the Ultra model. The camera module also causes a slight wobble when the phone is placed flat, which is common but still noticeable. 
For most people, though, the S26 covers the basics quite well—delivering strong performance, a bright display, and capable cameras in a form factor that is easier to handle than most flagship phones. Our Best Editor-Vetted Tech Deals Right Now
- Apple AirPods Pro 3 Noise Cancelling Heart Rate Wireless Earbuds — $199.99 (List Price $249.00)
- Blink Video Doorbell Wireless (Newest Model) + Sync Module Core — $35.99 (List Price $69.99)
- Ring Indoor Cam (2nd Gen, 2-pack, White) — $59.98 (List Price $79.99)
- Apple Watch Series 11 [GPS 46mm] Smartwatch with Jet Black Aluminum Case with Black Sport Band - M/L. Sleep Score, Fitness Tracker, Health Monitoring, Always-On Display, Water Resistant — $329.00 (List Price $429.00)
- Apple iPad 11" A16 128GB Wi-Fi Tablet (Silver, 2025) — $319.99 (List Price $349.00)
Deals are selected by our commerce team View the full article
  14. The job market is tough right now. According to the Bureau of Labor Statistics, job openings have been trending down, and are currently below pre-pandemic levels. In a hypercompetitive economy, people entering the workforce are facing fewer opportunities than just a few years ago. And for the 1 in 3 American adults with a justice-involved past, or any interaction with the criminal justice system as a defendant, their record is another obstacle in an already challenging job search. April marks Fair Chance Month, an annual opportunity to spotlight reentry programs, resources, and skills-training for formerly incarcerated people. Yet, as the conversation around second chance hiring has expanded each year, a criminal record can still reduce a candidate’s chances of a second interview by 50%. Even when people with justice-involved pasts take advantage of every opportunity, exclusionary hiring practices and systemic barriers make finding and retaining employment an uphill battle. For example, returning citizens frequently have trouble securing safe and reliable housing and transportation, and are therefore 10 times more likely to experience homelessness than the general public. When we systematically exclude people from employment because of a checked box, we’re not just denying them jobs, we’re denying them the foundation they need to rebuild their lives. BREAK DOWN BARRIERS Second chance hiring practices can—and should—be tailored to each company’s unique needs and challenges, but they have the potential to benefit any industry. Across industries and sectors, 85% of HR professionals and 81% of business leaders report that individuals with justice-involved pasts perform the same as, or better than, employees without. This reinforces the value second chance hires can bring to the company. At Frontier Co-op, we’ve seen firsthand the tangible impact second chance hiring can make on a community. 
We implemented our flagship Breaking Down Barriers to Employment program in 2018 to take a more holistic approach to addressing employment barriers. It involves adopting second chance hiring practices and working with a local nonprofit partner to provide access to comprehensive wraparound services. Internally, we provide subsidized childcare options, transportation, and an apprenticeship and skills training program. Most recently, we launched a savings match program to support our workforce’s long-term resilience. We’ve seen how this has grown our workforce, as more than 25% of Frontier Co-op’s production hires in the last year were justice-involved individuals. While anonymity is critical to the program’s success, one employee—Alisia Weaver—has chosen to share her story. She began as an apprentice and has grown into her current role as a machine operator. She will celebrate her sixth anniversary this fall. As an important part of our co-op’s advocacy in this space, Alisia offers her perspective on the impact second chance hiring has had on her life and future. “This experience has helped me advance in all aspects of my life. I have my own place, a vehicle, and daycare for my son. I’ve come forward to tell my story because I just want to encourage people and inspire them not to give up, no matter what setbacks they face,” she said. “I also want to encourage companies to try something different and consider adopting second chance hiring practices. It could be beneficial for you, but it could also change someone’s life.” RETHINK YOUR HIRING PRACTICES By embracing candidates with diverse backgrounds and perspectives, we’ve seen how this approach strengthens the resilience of both our workforce and our business. Most meaningfully, it has shaped our culture in lasting ways. Over the years, many employees have stopped me to share how proud they are of our commitment to fair hiring. 
So many people know or love someone who has been held back by a justice-involved past, and it matters to them to see their employer offering people a truly fresh start. But we can’t make these changes in silos. As a second chance employer, we’re proud to partner with organizations like the Responsible Business Initiative for Justice (RBIJ) and REFORM Alliance, which are leading the change and helping businesses remove barriers and create career opportunities for these individuals, to ensure a more inclusive workforce for all. “Businesses play a crucial role in keeping communities safe and healthy,” said Maha Jweied, RBIJ’s CEO. “Hiring justice-impacted job seekers can break cycles of incarceration, revitalize neighborhoods, and forge pathways for people to reach their potential—and that includes those with past convictions. By prioritizing inclusive hiring, we not only demonstrate our commitment to the communities we belong to, but also enhance our organizations with capable, dedicated, and resilient talent.” We know we can’t hire everyone regardless of their past, and we don’t view this program as a rehabilitation process. Our intent is simply to eliminate a bias that could negatively impact good candidates along the hiring journey. That’s something we think every organization and company can aim to do. This Fair Chance Month, I’d challenge all business leaders to take a moment to think a little differently—a little critically—about their hiring processes. Set aside time for an open, internal conversation about whether criteria related to justice involvement may unnecessarily be limiting candidate consideration. Reach out to a colleague who is doing this work to hear more about their experience, ask candid questions, and understand the challenges they’ve navigated. My door is always open. Tony Bedard is CEO of Frontier Co-op. View the full article
  15. Plug-in solar is on the way, and it could cut your electric bills. A growing number of states are poised to pass bills supporting the panels, which are designed for DIY installation: Hang one out a window or set it on a deck, plug it into a regular outlet, and power starts flowing back into your home. A new calculator helps you estimate how much you can save on power bills, using your zip code to estimate how much sunshine you get and how much you’re paying for electricity now. The tech could be especially useful in cities like New York, where renters have steep electric bills and don’t have roofs to install traditional solar panel systems. “A huge percent of this country is composed of renters,” says Cora Stryker, cofounder of Bright Saver, a nonprofit that advocates for the technology and just released the calculator. “What are you supposed to do? I mean, it’s really a powerless feeling—pun intended—to see your energy bills just spike and not be able to do anything about it.” Homeowners who don’t want to invest in a full rooftop system can also use plug-in panels. Designed for self-installation, they avoid the costs of permitting, inspections, hiring an electrician, and the marketing expenses of solar companies, which together make up nearly half the price of traditional systems. “The reason this is a game changer is we’re taking all those extra costs out, and we’re delivering the dirt-cheap cost of the technology to consumers so they can install it themselves,” Stryker says. “It’s pushing us toward a tipping point. For years now, clean energy has been cheaper to produce than fossil fuel alternatives. However, for the consumer that is not true. This is the beginning of that.” Plug-in solar panels, also known as balcony solar, became widespread in Germany when electricity bills surged because of Russia’s war in Ukraine; their use continues to grow throughout Europe. (In Germany, they’re so common that you can buy them at Ikea.) 
In the U.S., regulatory hurdles are beginning to fall. Right now, though the panels aren’t illegal, they require a complicated process of approval from utilities. But states are beginning to change that. Utah was the first to pass a law supporting the tech last year, exempting consumers from the need to get approval from utilities. Maine followed this month. Bills also passed in Colorado, Maryland, and Virginia and are awaiting signatures from governors. More than 20 other states are now considering bills, from both Republican and Democratic lawmakers. Some utilities have argued that the devices pose safety risks, but advocates say that years of use in Germany have proven that they’re safe. UL Solutions, the standards organization, is currently working on certifying devices to a new safety standard that was created at the beginning of the year, though Stryker says devices on the market in Utah meet existing standards. The panels come in different sizes, ranging from around 400 watts to 1.2 kilowatts, and cost between $400 and $2,000. A small panel could cover the power used by a full-size fridge. An 800-watt system could cover that along with a TV, lights, and other small equipment like routers. “It’s most meaningful for your background electricity demand, meaning what is running all the time,” Stryker says. It’s not like a whole rooftop system, which could power your entire house. But it can still make a noticeable difference on your electricity bills. In New York City, for example, someone using a 1,200-watt panel on an apartment balcony could potentially save $339 in a year. (It’s worth noting that the calculator doesn’t attempt to include whether the panel is facing south or how much other buildings might be shading it.) In Oakland, California, someone with the same panel could potentially save $491 because of the sunnier weather. The devices could help people who are struggling the most to afford electric bills, especially low-income renters. 
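The arithmetic behind that kind of savings estimate is simple enough to sketch. The function below is a rough approximation, not Bright Saver's actual model (which draws on zip-code sunshine data and local rates); the 0.85 system-efficiency factor and the example inputs are illustrative assumptions.

```python
# Rough version of the savings math a plug-in solar calculator performs.
# Efficiency factor and example inputs are assumptions for illustration.
def annual_savings(panel_watts: float, peak_sun_hours: float,
                   rate_per_kwh: float, efficiency: float = 0.85) -> float:
    # Energy produced per year, in kWh, then priced at the local rate
    kwh_per_year = panel_watts / 1000 * peak_sun_hours * 365 * efficiency
    return kwh_per_year * rate_per_kwh

# e.g. a 1,200 W balcony setup, ~3.5 peak sun hours/day, at $0.25/kWh
print(f"~${annual_savings(1200, 3.5, 0.25):.0f} per year")
```

With those example inputs the estimate lands in the low-to-mid $300s per year, in the same range as the article's New York City figure; sunnier locations or higher rates push it up.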
“Because electricity varies so much in cost, it really becomes an equity issue,” Stryker says. “The people living in the densest parts of the United States have the highest electricity [rates] almost universally.” View the full article
  16. Samsung's One UI software for its Galaxy phones comes packed with features and functionality, but there are also several official extra apps made by Samsung that don't come preinstalled on its phones—and they're well worth checking out. I've already written about the various Good Lock plug-ins—which let you build your own keyboards and set separate volume levels for individual apps—but that's not all there is to explore when it comes to additional apps. There's also Galaxy Enhance-X, a tool for polishing and improving your photos and videos, as well as manipulating digital documents. Enhance-X can do everything from applying cinematic filters to pictures, to scanning in documents and translating them at the same time, and it's free to install and use. It's also just been given a major revamp, with Samsung cleaning up the app's interface as well as adding some additional features. If you use a Samsung phone, you can get Enhance-X from the Galaxy Store. Learning the basics in Enhance-X There are now three tabs to work with in Enhance-X, part of the recent app interface revamp: Plug-ins, Home, and History. The Plug-ins tab is a good place to start, because it shows off some of the app's capabilities: Tap the download icon (the downward arrow) on FilmStyle to access nine extra filters for your pictures. These filters and many more effects can be applied to your photos and videos from the Home tab. This tab is essentially a file picker—you can select one or more photos and videos to work with. To switch to the standard Gallery app view (complete with albums and collections), tap the flower-style icon in the top right corner. Enhance-X comes with optional plug-ins. Credit: Lifehacker Pick one or more images, and you can choose between Photo tools and Doc tools (for scans) at the bottom; if you're selecting videos, there's just the Video tools option. 
That then takes you into the full editing interface, where you can see everything Enhance-X has to offer (including the FilmStyle filters). Use the icons at the bottom of the screen to browse through the tools, which are typically one-tap enhancements that the app will configure itself. There's Colorize for adding color to black and white photos, for example; HDR for boosting dynamic range; and Fix blur for images that aren't quite sharp enough. HDR is one of the color customization options. Credit: Lifehacker Many of these options are useful quick fixes, but there are some fun tools as well. Tap Creative then 24-hr time lapse, and you can turn any image into a short video—nothing in the image will move, but the colors will shift as if you're seeing the picture go through a full night-and-day cycle. Some of the tweaks available will vary depending on the type of image or video you've selected. Pick a portrait shot for example, and you get access to the Face tool—this gives you sliders for adjusting the smoothness and tone of the facial features, and you can adjust the strength of each effect individually. Exploring more Enhance-X features If you pick Film style filters from the Suggested tab when editing a picture, you can try out the filters we downloaded earlier. Use the thumbnails to browse between the different effects and see how they work—if you tap the small "i" button to the left you get a useful rundown of what each filter does and which types of images it works best with. Over on the video tools side, you've got options like Slow mo. This presents you with a timeline of your video, and if you press and hold at any point in that timeline, Enhance-X adds a special slow-motion effect. The app lets you preview changes before applying them. 
Credit: Lifehacker There are also simple trimming tools for your video clips, as well as a Single take section where you get to play around with effects like rebound (which creates a video that can loop infinitely) and highlights (which picks out the best parts of the video). Each effect can be previewed on screen before saving. For documents scanned as photos, there are a host of different options. You're able to apply crops, filters (to add or remove color), text, and scribbled highlights; you can combine different scans together in one document; and you can remove any unwanted scanned elements (like fingers). There are many different actions you can take on scanned documents. Credit: Lifehacker Choose Add text, for example, and you get the option to drop a text box right on top of your scan, with settings for font size, style, and color. Whether you need to add annotations or correct mistakes on the original document, it's straightforward and intuitive to use, and means you don't have to call up a separate app or start editing on a desktop interface. Head to the History tab to review all your edits and undo them if necessary. Enhance-X is something I've kept on my Galaxy phone ever since I discovered it, and it's often come in handy for edits that it can do more quickly than other apps or that other apps can't do at all—including the apps that actually come with One UI. View the full article
  17. As artificial intelligence integrates deeper into our workflows, understanding its vulnerabilities is critical. A recently exposed vulnerability known as Best-of-N (BoN) jailbreaking has redefined how we view AI safety. Here’s a breakdown of BoN jailbreaking, how the attack works, and why it creates real risk for your data, brand, and the AI tools you rely on. First, a quick vocabulary check Before getting into BoN, there are two terms you need to actually understand, not just nod at. Brute force attack: Imagine trying to crack a four-digit PIN by starting at 0000, then 0001, then 0002, all the way to 9999. No cleverness, no strategy, just trying every single combination until one works. That’s brute force. It’s dumb, slow, and works disturbingly often if nobody stops it. Stochastic: This just means random, or more precisely, probabilistic. AI models are stochastic because they don’t produce the exact same output every time you ask the same question. There’s built-in variability in how they generate responses. That’s by design. It’s what makes AI feel less robotic. It’s also a liability. What is Best-of-N jailbreaking? BoN is brute force, but smarter. Instead of trying every possible combination from scratch, it exploits the built-in randomness of AI models. The logic is simple: if an AI gives slightly different answers every time, and some of those answers slip past its own safety rules, then the attacker just needs to ask enough times, in enough slightly different ways, until one version of the question gets the forbidden answer through. That’s not just a technical edge case. It means safeguards can be bypassed at scale, with direct implications for how your team uses AI tools every day. 
The research behind this technique describes it as a “simple black-box algorithm.” Black-box means the attacker doesn’t need to see inside the model. No access to the code, no insider knowledge required. They’re working from the outside, just like any regular user would. Think of it like a kid asking for candy when you’ve already said no. The first “no” doesn’t stop them. They rephrase, change their tone, ask at a slightly different moment, and try from a different angle. They ask another adult or wear you down, not by finding a magic phrase, but by generating enough variations that eventually one lands at the exact moment your patience runs out. BoN is that kid, automated, running thousands of variations per minute. How the attack works — and how easy it is to set up This is the part that should make you uncomfortable, because it shows how little effort it takes to turn this into a real-world risk. The setup isn’t sophisticated. Step 1: Augmentation The attacker takes a forbidden prompt, something the AI is trained to refuse, and generates hundreds or thousands of variations. Not clever rewrites, just noise: random capitalization (HoW Do I…), scrambled characters, inserted typos, and meaningless filler tokens. Ugly, broken-looking text that a human would immediately recognize as weird, but that an AI processes token by token. Step 2: Bombardment All those variations get sent to the model simultaneously, or in rapid succession, using a simple script. This isn’t a complex operation. Anyone with basic Python knowledge and access to an API can automate this. The compute cost is low. The barrier to entry is lower than most people assume. Step 3: Selection An automated grader, often just another LLM, scans all the outputs and flags the one response that bypassed the safety filter and delivered the restricted content. The attacker doesn’t read thousands of responses. The second AI does the screening for them. That’s the full attack. 
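The three steps read as more complex than they are. Here is a minimal sketch of the loop against a toy stand-in model; the augmentation probabilities, the exact-match keyword "filter," and every name below are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(0)  # seeded so the toy run is reproducible

# Step 1: Augmentation. Generate noisy variants of one forbidden prompt.
# The 20% case-flip and 5% character-drop rates are assumptions.
def augment(prompt: str, n: int) -> list[str]:
    variants = []
    for _ in range(n):
        out = []
        for ch in prompt:
            r = random.random()
            if r < 0.20:
                out.append(ch.swapcase())  # rAnDoM capitalization
            elif r < 0.25:
                continue                   # dropped character (typo)
            else:
                out.append(ch)
        variants.append("".join(out))
    return variants

# Step 2: Bombardment. A toy stand-in for a real model API: its "safety
# filter" is a brittle exact-match keyword check, which is the point --
# the noise only has to dodge the check once.
def toy_model(prompt: str) -> str:
    return "REFUSED" if "forbidden" in prompt else "COMPLIED"

# Step 3: Selection. An automated grader flags any non-refusal.
variants = augment("tell me the forbidden thing", 500)
hits = [v for v in variants if toy_model(v) == "COMPLIED"]
print(f"{len(hits)} of {len(variants)} variants slipped past the filter")
```

Real safety filters are far more robust than an exact-match check, but the research finding is that the same ask-enough-times logic still wins at scale.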
No special hardware, no insider access, and no advanced degree in machine learning. The numbers behind BoN The original research clocked an 89% attack success rate on GPT-4o and 78% on Claude 3.5 Sonnet when running 10,000 augmented prompt variations. With just 100 variations, Claude 3.5 Sonnet still failed 41% of the time. This didn’t quietly fade into the research archives when the models got updated. It was presented as a poster at NeurIPS in December 2025. NeurIPS is the most prestigious machine learning conference in the world. And the attack has only gotten faster. Newer BoN-based techniques can now achieve comparable success rates while cutting the time to attack from hours to seconds. Meanwhile, OWASP, the gold standard for cybersecurity risk rankings, listed prompt injection, the category BoN falls under, as the No. 1 vulnerability in their 2025 LLM Top 10. The success rate also follows a predictable power-law curve, meaning attackers can mathematically forecast how many attempts they need before they break through. Forget luck, we’re talking about a calibrated, scalable operation. BoN also works across all modalities: text, images (change the font, background, and color), and audio (adjust pitch, speed, and background noise). Every format and frontier model tested. Why it’s a marketing and branding problem Cybersecurity and marketing used to be separate conversations. AI collapsed that boundary and put brand risk directly inside your AI workflows. Safety filters are porous, not protective The research is unambiguous: given enough augmented attempts, some will get through. This applies to every AI tool in your stack, whether it’s internal, customer-facing, or embedded in your content workflows. 
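The power-law curve mentioned above is what makes the attack forecastable. A hedged sketch of that forecast, assuming the reported functional form -log(ASR) ≈ a·N^(-b), where N is the number of sampled variants; the coefficients below are illustrative values chosen for this example, not the paper's fitted parameters.

```python
import math

# Forecast attack success rate (ASR) from a sample budget N, assuming
# the power-law form -log(ASR) = a * N**(-b). Coefficients are
# illustrative assumptions, not fitted values from the paper.
def forecast_asr(n_samples: int, a: float = 2.6, b: float = 0.33) -> float:
    return math.exp(-a * n_samples ** (-b))

for n in (100, 1_000, 10_000):
    print(f"N={n:>6}: forecast ASR ~ {forecast_asr(n):.0%}")
```

The practical consequence is the one the article names: an attacker can fit the curve on a cheap pilot run and then budget exactly how many variants a high-probability break-through requires.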
Your prompt inputs carry legal risk When your team pastes a client brief, a competitor’s ad copy, or licensed third-party content into a prompt to “get AI help,” you’re introducing material that could later be extracted. BoN jailbreaking demonstrates that copyrighted content can be extracted from model weights under the right conditions. If an AI can reproduce verbatim text when sufficiently probed, that content is encoded in there. The safety filter was the only thing standing between it and the output. Brand exposure through your own AI tools If someone uses BoN to jailbreak an AI tool your brand has deployed, a customer chatbot, or a content generation tool and extracts harmful, offensive, or legally compromising output, the story doesn’t start with “AI was jailbroken.” It starts with your brand name. You know this, journalists know this, and social media content creators know this. Attack composition makes this worse BoN doesn’t operate alone. Combining it with a “prefix attack,” a carefully crafted phrase attached to the start of each prompt, boosted success rates by an additional 35% while requiring fewer attempts. The technique actively evolves toward greater efficiency. What you should do now Audit what goes into your prompts Treat prompt inputs with the same sensitivity you’d apply to data under GDPR. Licensed content, client briefs, proprietary information — none of it belongs in a third-party AI tool without a clear data policy from the vendor. Stop treating safety filters as compliance If your AI vendor says the model is safe and that settles it for you, you’ve outsourced your risk assessment to the party that profits from minimizing it. Output monitoring, anomaly detection on request volume spikes, and continuous red-teaming are due diligence. Understand that the attack surface spans every modality Text, image, and audio. BoN applies across all of them. 
If your brand uses any AI-powered tool that handles user inputs in multiple formats, the vulnerability applies. Log everything Prompts in, outputs out. If an incident happens, legal will ask what the model was given and what it produced. Without logs, you have no defense and no evidence. What BoN jailbreaking reveals about AI safety limits The same built-in randomness that makes AI useful for creative and marketing work makes it exploitable at scale. BoN jailbreaking is an active, validated, and accelerating threat that the cybersecurity community is racing to defend against. Most marketing teams haven’t yet priced in the brand, legal, and reputational stakes. The ones that do first will build defensible practices before they need them. The rest will learn it through an incident they didn’t see coming, and won’t be able to explain after the fact. View the full article
  18. If you’ve been building consumer hardware for any real amount of time, you know the pattern. Most of these shifts start the same way. The sensor exists, but it’s stuck in clinical settings where it’s expensive, awkward, and not something anyone would realistically use day to day. At some point, someone figures out how to shrink it down enough to fit into a real product, and a few companies take an early shot at turning it into something people actually want. Early on, it’s easy to dismiss. It looks niche, maybe even like a gimmick. But adoption starts to build, usually more gradually than people expect at first. Then it picks up, and within a product cycle or two, it stops feeling optional and just becomes part of the baseline. That’s typically the point where it becomes clear who planned for it and who didn’t. And if you didn’t, you’re trying to retrofit something fundamental into a product that wasn’t designed for it. In almost every case, most of the market waits. Not for the technology but for validation from a small set of industry leaders. By the time that signal arrives, the category is already defined, and the leaders are already ahead. Heart rate monitoring is the textbook case. Electrocardiography has existed since the early 1900s. For decades, continuous heart rate data meant a clinical setup or, at a minimum, a chest strap and a willingness to look like you were under house arrest while jogging. Then optical sensors got small and cheap enough to sit on a wrist. Polar shipped the first wireless heart rate monitor in 1977, and it was built for elite Finnish cross-country skiers, not for everyday users. For a long time, that kind of data stayed in that world, or at least required gear most people wouldn’t bother with. Then Fitbit brought heart rate into a simple wristband, Apple built it into a watch, and it gradually became part of how people expected these devices to work. At this point, it’s hard to imagine a fitness product without it. 
What used to feel specialized is now just assumed. The entire category was reorganized around a sensor that used to require a hospital visit. What’s easy to forget is that consumers didn’t ask for this. Apple and the companies that followed turned heart rate into a requirement before most people knew why it mattered. Once it was there, it became unthinkable to ship without it. Enter Brain Sensing Brain sensing will follow the same path. The first companies to integrate it won’t be responding to demand so much as shaping it. And once users experience products that adapt to their cognitive state, going back will feel like a downgrade. Active noise cancellation did the same thing to headphones. Bose had the science for years, originally developed for aviation, before Sony and Apple turned it into a consumer expectation that redrew the entire competitive map in premium audio. If you were making $300 headphones without ANC by 2020, you weren’t in the conversation. The companies that waited didn’t lose because the tech was unclear; they lost because they waited for confirmation. We’re seeing this now in the age of AI. Google invested heavily in AI research for years, improving internal processes and products with LLMs since the late 2010s. It wasn’t until ex-Google employees came up with the idea to launch a chatbot (in the form of ChatGPT) that AI became a mainstream term (and prompted Google’s famous “code red” initiative at the end of 2022). The technology didn’t suddenly appear, but the shift in market perception forced everyone else to react. What’s worth noticing is that in every case, the underlying technology was well understood long before anyone productized it. Science wasn’t the bottleneck. The engineering was shrinking the sensor, solving the noise problem, making the experience seamless enough that a normal person never thinks about the technology underneath. That’s exactly where we are right now with brain sensing. 
And the product category it’s going to hit first is everything worn on or around the head.

What’s taking so long?

Which raises a reasonable question: if the brain is the most important organ we have, why hasn’t anyone turned brain data into a consumer standard already?

Electroencephalography (EEG) has been measuring the brain’s electrical activity since 1924. Hans Berger, a German psychiatrist, captured the first recording of human brainwaves almost exactly a century ago. Since then, EEG has become one of the most widely used measurement tools in clinical neuroscience. It’s standard in hospitals for diagnosing epilepsy, evaluating traumatic brain injuries, studying sleep disorders, and flagging early markers of neurodegeneration. This is not emerging science. This is established, validated, battle-tested science that has been sitting there waiting for someone to solve the product problem. The limitation was never understanding the brain; it was making the technology disappear into a product people would actually use.

The basics: your brain emits tiny electrical signals every time neurons fire in coordinated patterns. Just like EKGs pick up the electrical pulses from your heart, EEG detects the electrical pulses from your brain. The best part? It’s completely noninvasive. The user doesn’t feel a thing. And when you process those signals well, they tell you a surprising amount about how someone’s brain is actually performing in real time.

So why has it taken a hundred years for this to land in a consumer product? Because three hard engineering problems were stacked on top of each other, and until recently, no one had solved all three.

The sensors were a nonstarter for consumers. Clinical EEG uses wet electrodes: metal discs that need conductive gel, a skilled technician, and a setup process that takes 20 to 45 minutes (or more). The caps can run anywhere from 64 to 256 electrodes wired across the scalp. Outstanding data.
Zero chance anyone’s doing that before their Monday standup. What changed is material science. Soft, dry, conductive fabric sensors can now capture EEG signals from the skin on the head, around or in the ear, with enough fidelity to produce research-grade data. They integrate directly into the ear cushions of headphones, so the form factor and comfort stay the same, and the user doesn’t have to think about them at all.

Brain signals are absurdly quiet. I mean absurdly. We’re talking microvolts: one millionth of a volt. A single jaw clench can generate electrical noise orders of magnitude louder than the brain signals you’re trying to read. In a controlled lab, you can manage that. In the real world, where your customer is walking through an airport or grinding their teeth during a Zoom call, the signal-to-noise ratio is a nightmare.

This is where AI earned its keep, and I mean years of earning it, not a model someone fine-tuned over a weekend. Machine learning systems trained on thousands of hours of real-world brain data from thousands of users can now isolate neural activity from muscle artifacts, electrical interference, and movement noise, in real time, on compact hardware. Some of these models have been validated through work with the Department of Defense and partnerships with clinical institutions. The signal processing is the moat. It’s what separates legitimate consumer EEG from the wave of pseudoscience wearables that have come and gone over the past decade, and there have been plenty. It had to be invisible.

The final step

The last piece is pure product engineering. EEG systems that once needed dedicated amplifiers and bundles of wires now run on the same Bluetooth chips and battery budgets as premium noise-canceling headphones. Multi-channel EEG, a 250 to 500 Hz sampling rate, and wireless data transmission all fit inside an ear cup, with enough juice left to maintain typical battery life. The user puts on headphones. The brain sensing just happens.
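To make the signal-processing discussion concrete, here is a minimal sketch (not any vendor's actual pipeline) of the most basic step in EEG analysis: estimating relative power in the canonical frequency bands from a single channel, using SciPy's Welch PSD estimate. The synthetic signal, band edges, and 256 Hz sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

# Canonical EEG bands in Hz (exact boundaries vary across labs).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(signal, fs):
    """Fraction of 0.5-45 Hz power falling in each canonical band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 0.5 Hz bins
    in_range = (freqs >= 0.5) & (freqs < 45)
    total = psd[in_range].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Illustrative trace: a 10 Hz alpha-band oscillation buried in noise,
# sampled at 256 Hz (inside the 250-500 Hz range mentioned above).
fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

powers = relative_band_power(eeg, fs)
# Alpha should carry most of the relative power for this signal.
```

Real products do far more (artifact rejection, per-user calibration, learned models), but relative band power is the well-characterized foundation the article refers to.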
What matters is that these three breakthroughs compounded. Better sensors generated cleaner data. Cleaner data trained better models. Better models meant you could extract more signal from fewer, smaller sensors. That flywheel is what finally moved brain sensing from “technically possible in ideal conditions” to “shipping in consumer hardware.” In other words, this is no longer a research problem. It’s a product decision.

If you’re evaluating this for your roadmap, this is where things tend to matter most, because overclaiming is rampant in this space, and it erodes trust fast. Consumer-grade EEG has been validated in DoD-reviewed research and in real-world deployment for detecting changes in cognitive state over time. The brain’s electrical oscillations fall into well-characterized frequency bands (delta, theta, alpha, beta, gamma), and the relative power across those bands shifts in predictable ways with different mental states. That’s the foundation. In practice, a few applications are already reliable today:

Focus and attention detection is the most robust application: distinguishing sustained concentration from mind wandering, backed by substantial published research. A system can proactively recommend an intervention when a user’s focus starts to drop, in some cases hours before they’d normally take a break.

Cognitive fatigue detection identifies declining mental performance before the person subjectively notices it. This has been validated across populations from office workers to military personnel, and it’s one of the most immediately useful applications for product integration. Imagine your earbuds coaching you through the last mile of a long race when they detect your cognitive resources need it most. That’s the kind of differentiator this technology can enable.

Cognitive load estimation measures how hard the brain is working on a given task. It’s relevant for UX research, adaptive interfaces, gaming performance, and workplace optimization.
It’s also crucial across military, driver, and pilot use cases, where the goal is to pull someone out before accidents happen.

Longitudinal brain health trends track shifts in baseline brain activity over weeks and months. These patterns correlate with sleep quality, stress levels, and aging. The research on whether they can serve as early indicators of neurological change is promising but still maturing. It’s worth watching closely, but it would be irresponsible to overstate where the science is today.

What makes this different from earlier biosensors is how the data gets used. Heart rate data (PPG) is retrospective. It tells you what has already happened to your body. EEG is real-time and bidirectional. The system detects a shift in your cognitive state and responds to it immediately. That’s not a subtle distinction. It’s the difference between a dashboard that tells you what already happened and a system that actively changes with your performance in real time. The closed-loop potential, where the product adjusts audio, pacing, content, workload, or alerts based on live brain state, is the innovation that makes this genuinely new territory. No previous consumer sensor has enabled this.

The limits

Now, what EEG does not do: it does not read thoughts. It does not decode what someone is thinking about. It measures how the brain is performing, not what it’s processing. The applications right now are wellness and performance, not clinical diagnosis. That line matters, scientifically and regulatorily, and any partner worth working with will be clear about it. If someone tells you their EEG can do more than this today, ask to see the published validation data. The credible players in this space welcome hard questions. The others deflect them.

If you’re running a product org for headphones, gaming headsets, earbuds, AR glasses, helmets, hearing aids, or anything head-worn, the integration math looks like this: the physical footprint is smaller than most people expect.
Unobtrusive, comfortable sensors embedded in existing ear tips or cushion form factors. A firmware layer handling signal acquisition and transmission. A software platform doing the processing. If your product already makes contact with the skin in or around the ear or on the head, you’re working with a compatible starting point. The industrial design disruption can be minimal, the sensors are invisible to the end user, and you’re not asking your customers to do anything differently.

You also don’t have to build a neuroscience team. The technology stack (sensors, firmware, signal processing, AI models, and app infrastructure) is licensable. Think about the model Qualcomm established for mobile connectivity or what Dolby did for audio processing. Deep technology, integrated into your product, without requiring a decade of R&D you haven’t done. The hard years of data collection, algorithm training, and clinical validation already happened. You’re buying the outcome, not the journey.

And here’s what most hardware companies miss: this isn’t a feature add. It’s a new computing layer, one with a roadmap that compounds over time, and with revenue models that pure hardware doesn’t support: subscriptions, premium tiers, enterprise licensing, data partnerships. The companies integrating now aren’t just acquiring a sensor. They’re taking a position in a platform that’s still being built, at a moment when that position is still available.

And the feature set is meaningful and available today: focus tracking, fatigue detection, cognitive health insights, personalized performance coaching, and brain break prompts. Devices with these features are already shipping, and early data shows two out of three users reporting measurable improvements in daily focus. That’s the kind of engagement metric that supports premium pricing and retention. This is also just the baseline.
New biomarkers and applications are in active development, sleep biofeedback is already in the pipeline, and the platform roadmap keeps expanding as more real-world data gets collected. How much that matters will depend on whether you’re in a position to take advantage of it.

The gaming wearables market is projected to grow from $5 billion to nearly $20 billion by 2034. The BCI market overall is expected to exceed $52 billion globally in the same timeframe. Brain sensing headsets are already winning “Best of CES” awards. This isn’t a niche technology looking for a market. The market is forming in real time.

A compounding advantage

One part that doesn’t get discussed enough is that brain data has a compounding advantage. The companies that start collecting it first build better models. Better models attract more users. More users generate more data. That flywheel is extremely difficult to replicate once a competitor has a multi-year head start on it. If you’ve watched what happened with fitness data ecosystems, and how hard it is to switch away from a platform that has years of your health history, you understand why the early mover advantage here isn’t just about features. It’s about the data layer underneath. At this point, it’s less about whether this works and more about whether you’re early enough to matter.

If I were sitting in a product review evaluating whether to pursue brain sensing integration, the questions I’d focus on are:

On integration: What’s the BOM impact? What changes in my existing ID? What does sensor contact look like across different head shapes and hair types? What happens when contact is bad? Does the system fail silently, throw errors, or degrade gracefully?

On the platform: What does the user see, and through what interface? How much processing happens on device versus in the cloud? How is sensitive brain data protected? What’s the privacy architecture?
(This one is non-negotiable, and regulators are already circling: Colorado passed the first state privacy act that explicitly includes neural data as protected information.)

On the business: What’s the evidence on willingness to pay for cognitive features? Which verticals are moving fastest? What does the regulatory landscape look like if I want to make wellness claims versus health claims?

A good partner has clear answers to all of these. If they’re hand-waving on any of them, you’re in the wrong conversation.

Heart rate monitoring existed for a century before it became a consumer standard. Active noise cancellation sat in aviation for decades before it redefined headphones. AI supported internal products and infrastructure at Google for nearly a decade before chatbots were widely adopted. In all cases, science was never the holdup. The product packaging was. And in all cases, the companies that moved early didn’t just have a feature advantage; they defined what the category became.

Brain sensing is on this same path. The science is validated. The engineering is solved. The form factors are ready. The first products are shipping and winning awards. At this point, it mostly comes down to timing and whether you’re early or playing catch-up. You’ve watched this exact pattern play out before. You know how it ends for the companies that wait. View the full article
  19. Cracks appear in cabinet following evidence from sacked head of Foreign Office. View the full article
  20. A video I posted to Instagram on a whim hit 110,000 views this month. I originally made it for LinkedIn as part of my content pillars: a simple video on where to find remote jobs. Somewhere in the middle of exporting it, I thought, why not just post it everywhere? So I did. And Instagram was where it really took off. Which was strange, because I've been on Instagram since 2016. That's 10 years of birthday carousels and travel photo dumps, and never once treating it like somewhere I could actually grow.

If you've been following my Proof of Concept series, you know I've spent the last few months trying to grow on Threads. I had a plan, complete with a follower goal, a deadline, and the results of applying all the rules I recommend to other people. I didn’t get to 1,000 on Threads by December 15, 2025. And after one accidental crosspost, I'm not sure Threads is where I should have been trying to grow in the first place. So here's what has changed.

A quick recap on Proof of Concept

If you're new to the Proof of Concept series, here's what you need to know: after hitting 20,000 followers on LinkedIn, I wanted to get back to experimenting with content the way I used to, before I knew what "worked." The first platform I tested was Threads. My goal was to grow from 366 followers to 1,000 by December 15, 2025, organically, through consistency and curiosity. I'm at 824 today. Not quite 1,000, but close enough that I'd probably have gotten there if I kept going. But I'm not trying to close that gap anymore because I'm no longer sure it's the right platform to test this on.

So why am I pivoting? A few things happened at once. The first was that Threads is faster than I'm used to from LinkedIn. On LinkedIn, I'm writing for people scrolling between meetings. The pacing is slower, and a post can afford to wind up before it lands. Threads is the opposite: the posts that travel there are reactive and short.
I was constantly translating my thinking into a format I hadn't built real muscle for yet, which meant I was figuring out what good looked like in real time while also trying to grow. The second was that I didn't have enough fluency with the platform to form a good hypothesis in the first place, never mind test one. A Proof of Concept works best when you already have enough intuition for a platform that you can actually test something. I was still learning Threads. That's a different project.

And then the third thing happened, which I didn't expect: Instagram started working. One of my cross-posted videos hit 110K views, and suddenly I had an actual recipe for virality on a platform I could play with. That got me excited about Instagram in a way I hadn't been about Threads in a while. And at some point, I had to be honest with myself that the energy was pulling me somewhere else. I'm not done with Threads. I still have a profile, I still post, and I'm still curious about it. But it's time for a pivot.

Why I'm pivoting to Instagram (the platform I've avoided for 10 years)

For years, there was a voice in the back of my head going, "Whoa, the people who actually know you are going to see this." That's a bit embarrassing to type. But it's the truth. I made my Instagram in 2016, back when the platform was still mostly doing what it originally promised: a place to stay in touch with friends and family and keep your people updated on your life. That's how I used it, and for a long time, that's all I wanted it to be.

When people started turning Instagram into a creator platform, I felt a lot of resistance. I couldn't fully explain it at first, but I recently figured it out. My LinkedIn, TikTok, and Threads accounts all came after I was deep into my career and already sold on the idea of becoming a creator. Those were creator accounts from day one.
Instagram was the only profile I had from before any of that, and the people who followed me there were people who actually knew me: friends, family, people from school, old coworkers, the girl I met in the bathroom at a bar, and so on. So the thought of posting creator content into that feed made me shy, in a genuine way I didn't expect. I also had a mental block about what starting on Instagram would have to look like. I assumed I'd need to build a whole new profile, or if I kept this one, I'd have to unroot ten years of personal history to make it "on-brand." Either way, it had to be a production. Turns out, it didn't.

What actually happened

I made a few videos for LinkedIn: one for my Buffer work anniversary, a couple about what I do at Buffer, and one about where to find remote jobs. I was experimenting with hooks on screen, different video styles, all the things I'd been telling other people to try. Then I thought, why not crosspost them to Instagram and TikTok? So I did. Cross-posting the videos on TikTok helped me pass 1,000 followers, which was nice. But Instagram was where something actually happened. One of the videos hit 110,000 views, and I grew from about 1,200-ish to 2,319 followers in a matter of weeks. And it all happened because I was finally applying all the advice I had been sharing with other people.

I'm Instagram-native, I just haven’t been acting on it

I've been on Instagram for a decade. I know what a good reel looks like, how to identify trending sounds, and what to put in a photo dump to make it nonchalant. None of that came from (just) observing and studying creators, but from being a regular user for ten years, building intuition I never thought of as a skill. That's different from where I was on Threads. On Threads, I was still figuring out the language. On Instagram, I already speak it. I just hadn't been using it to say anything. Which changes what this next Proof of Concept is testing.
The Threads hypothesis was whether consistency and curiosity could get me to 1,000 followers organically. The Instagram hypothesis is more interesting to me: what happens when you lean into a platform you're already fluent in? For other creators reading this: the platform you're avoiding might be the one you'd grow on fastest. The resistance you feel toward it is probably what's kept you from noticing how much you already know.

My plan for Instagram

I've set some rules for this pivot to Instagram that are purposefully looser than the ones I had for Threads. I'm leaning on intuition for the creative calls and on the proven tactics from Sabreen's growth playbook for the mechanics. Speaking of, you can check out Sabreen's full list of recommendations here — she's grown her own account past 15K followers and Buffer's past 100K, so she knows what she's talking about. I'm borrowing from it liberally.

The goal

I want to grow from 2,319 followers to 5,000 by the end of Q2 2026. That's about 10 weeks from the day this article publishes, and yes, it's a bigger jump than my Threads goal of 1,000 (from 366), but if I'm right about the Instagram-native thing, the growth rate should reflect that.

The posting cadence

Sabreen's data-backed recommendation is 3–5 posts per week, which grows followers 2x faster than posting 1–2 times. That's the cadence I'm aiming for. But I'm not setting a daily minimum, and I'm not going to feel bad about skipping days. I want this round to feel like play, not a checklist. If I hit 3 posts a week while staying in play mode, I'll take it.

The approach

If I see a trend I want to try, I'll try it the same day. If I have an idea, I'll create it in whatever format feels right and figure out after the fact what landed. I'm using the full format mix Sabreen recommends: Reels for reach (they get 36% more reach than other post types), carousels for engagement (they drive 12% more engagement), and photo dumps when the mood strikes.
Kirsti, who's also been growing on Instagram, gave me one more recommendation I'm taking to heart: "I now always create two slightly different versions of the same content and post them as trial reels 24 hours before I want them to go live. Then I see which one performs the best. My theory is that trial reels force Instagram to push your content to people who may actually be interested in the specific reel, rather than just your followers. It's a great way to help you reach the audience you want rather than the audience you have." I'm definitely going to try her approach, which she's also expanded on in this article. Trial reels also let me experiment without committing a post to my main grid, which fits the "treat this like play" energy I want.

The content pillars (for now)

The topics I'm leading with: what I've learned growing on LinkedIn, how I've landed brand partnerships there, remote work advice, and my career journey. I'm leaning on what I already know works. These are topics my existing audience trusts me on, so they're my on-ramp to Instagram. I know I can't lean on them forever. Instagram is not LinkedIn, and eventually, I want to figure out which of my pillars are native to the platform. But right now, this is what's resonating, so I'm going to ride it until something more Instagram-specific clicks into place.

Engagement as a growth tactic

One more from Sabreen's list that I'm making a real habit: replying to comments. Her data shows it boosts engagement by 21%, and more importantly, it's how you turn first-time viewers into people who actually stick around. For an account that's been mostly passive for 10 years, this is the shift that will feel the most different.

What success looks like (beyond the follower count)

Hitting 5,000 would be great.
But I'm defining a few other wins for myself too:

Building Instagram-native fluency while my topics still lean LinkedIn-adjacent

Most of what I talk about (remote work, job hunting, career stuff for early-to-mid career professionals) doesn't look or feel like the fashion, food, or wellness content that dominates my Instagram feed. I want to see if I can make my topics Instagram-shaped without stripping the substance out of them. If the 110K-view video is any indication, the answer might be yes. But one video isn't a pattern, so we’ll see.

Building visual taste as a skill

On LinkedIn and Threads, the craft is mostly writing. Instagram is a different craft: pacing, framing, color, sound, all things I haven't had to think much about on the other platforms. I've always had a good eye for aesthetics, I just didn't point it at my own content. I'm curious whether leaning into that changes how I think about my content on the other platforms, too.

How much of this is the platform, and how much of it is me finally trying?

This is the one I don't have a clear answer for. I've been on Instagram for 10 years and never tried. It's possible that any platform would start to work once I actually showed up for it, but I'll know for sure in a few months.

And then there's the thing I'm not going to hype up just yet, so let’s keep this between us: if I hit 5,000 followers, the next Proof of Concept is probably about turning an Instagram audience into income. Brand partnerships, specifically. I've done a handful of paid partnerships on LinkedIn over the last two years, but Instagram is where the real brand partnership money lives. I want to see what it looks like to build it intentionally, as a smaller creator, and share the numbers. But that's a story for another article.

More resources

Everything I’m Trying to Grow to 1,000 Followers on Threads
I Reached 20,000 Followers on LinkedIn and I Feel Weird About It

View the full article
  21. We may earn a commission from links on this page. One million years ago (sometime before 2020), Peloton had a series of Bike classes designed around heart rate zone training. Christine D’Ercole would tell you what zone your heart rate should've been in for each part of the workout, and you’d adjust your effort accordingly. Those classes are long gone, but Peloton is dipping a toe back into the world of heart rate training with its new “Zone 2” collection.

Peloton's collections are just groupings of existing classes, so there aren't (yet?) any classes that are designed around heart rate zones. Instead, if you tap the “Zone 2” collection on your Bike, Tread, or Row, or in the phone app, you’ll see 16 Zone 2-ish classes, including:

Four cycling classes, including two 60-minute Power Zone Endurance rides and two shorter Power Zone Recovery rides.
Eight “Tread + Outdoor” classes, about half of which are walks and half are runs. You can do these either on a treadmill, or outdoors with your phone in your pocket.
Four Row classes, all labeled as Endurance Row and ranging from 20 to 45 minutes.

What it’s like to take one of Peloton's Zone 2 classes

Credit: Beth Skwarecki

I tested out one of the cycling classes—the 45-minute Power Zone Recovery Ride with pro cyclist Christian Vande Velde. Power Zone training bears no relation to heart rate zones. Instead of watching your heart rate, the instructor cues you to pedal hard enough to match one of seven power zones that are based on how much mechanical power you are putting into the pedals. Normally, Power Zone workouts range from zone 1 to 5, with Power Zone Max classes peaking in the higher zones.
Power Zone Endurance rides (PZE) are at the other end of the spectrum, with most of the class spent in zones 2 and 3. The two Power Zone Endurance rides in the Zone 2 collection are notable for being lower intensity than most other PZEs. Instead of bouncing between power zones 2 and 3, you’re in power zone 2 the whole time. The Power Zone Recovery rides are even easier: you bounce between power zones 1 and 2.

I hooked up my trusty heart rate chest strap to both my Peloton Bike and to my Coros watch, and took the class. We spent the first 15 minutes in zone 1, then a few short segments in zone 2 (while standing up out of the saddle!) with long zone 1 sections between. If this doesn’t sound like much of a workout, you’re right—Christian emphasized that “this is not training. This is recovery from your training.”

What is the purpose of Zone 2 classes on Peloton?

Christian’s statements during the class made me wonder if people might find this type of workout to be a bait-and-switch. If you listen to the fitness influencers, we should all be doing more—maybe all—of our cardio in heart rate zone 2. So what do you mean these classes aren’t training? Truthfully, I get it: Heart rate zone 2 is a pretty low intensity of exercise. It’s a great low-stress addition to your training routine, especially if you’re trying to increase the number of miles you run or hours you train. But if you’re training to get fitter, you need intensity! Heart rate zone 3 has plenty of benefits, and the VO2max-boosting Norwegian 4x4 workout does its magic in heart rate zone 4.

I could definitely see myself reaching for the Zone 2 collection when I want a recovery day or an easier version of an endurance day. But I’d still stick with the regular PZE classes for a more standard endurance workout.

Do Peloton’s Zone 2 classes actually put you in zone 2?

Left to right: Peloton, Coros, Garmin. All are using data from the same ride.
(Coros recorded a little bit of my stretching session afterward, which is why the average HR is different on that one.) Credit: Beth Skwarecki

Besides checking out the class design and intensity level, my other reason for trying one of these classes was to see whether my heart rate actually reached, and stayed in, zone 2 while taking it. Whether it succeeded depends on whose definition of zone 2 you’re using—because apps disagree.

If you connect a heart rate monitor to your Peloton equipment or app, you’ll get Peloton’s five heart rate zones, which define zone 2 as being 65% to 75% of your maximum heart rate. On the other hand, my Coros watch has six zones, with zone 2 being 50% to 60% of my max heart rate. For what it’s worth, my average heart rate was 122, which is around 60% of my max. Coros tells me I spent 39% of my time in the “warm up” zone (zone 2) and 43% in the “fat burn” zone (zone 3). Peloton says I spent 65% of my time in zone 1, and 31% in zone 2. If I were using a device like a Fitbit or Pixel Watch, I would have been split pretty evenly between “moderate” and “vigorous” (low and medium, in a three-zone scale). If I were using an Apple Watch, I would have been split between zone 1 and zone 2. Garmin is the “winner” here, in a sense—it’s the only system that has me in zone 2 for the majority (57%) of the ride, with 23% in zone 1 and 15% in zone 3. (To get those numbers, I used the Peloton-to-Garmin sync.)

Watching my heart rate on the Peloton screen (with a paired chest strap), I noticed that most of the time when I was told to pedal in power zone 1, my heart rate was near the top end of heart rate zone 1. On the intervals, I found that standing up spiked my heart rate into zone 3 pretty quickly, but that if I did the intervals while seated, my heart rate didn’t go above zone 2.
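The disagreement between apps is just arithmetic over different percentage bands. Here is a quick sketch; the 203 bpm max heart rate is an assumption back-calculated from a 122 bpm average being "around 60%" of max, and the percentages are the two schemes named in this article (Peloton: 65-75%, Coros: 50-60%).

```python
# Zone 2 boundaries under two of the schemes described above.
ZONE2_SCHEMES = {"peloton": (0.65, 0.75), "coros": (0.50, 0.60)}

def zone2_bounds(max_hr, scheme):
    """Zone 2 heart rate range in bpm for a given scheme."""
    lo, hi = ZONE2_SCHEMES[scheme]
    return round(max_hr * lo), round(max_hr * hi)

max_hr = 203  # assumed: 122 bpm average described as roughly 60% of max
avg_hr = 122

peloton_z2 = zone2_bounds(max_hr, "peloton")
coros_z2 = zone2_bounds(max_hr, "coros")

# The same 122 bpm average lands at the top of Coros zone 2,
# but well below the bottom of Peloton zone 2.
in_coros_z2 = coros_z2[0] <= avg_hr <= coros_z2[1]
in_peloton_z2 = peloton_z2[0] <= avg_hr <= peloton_z2[1]
```

The same ride can be "zone 2" on one device and "zone 1" on another purely because the band definitions differ, which is the point the app comparison above illustrates.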
In part that’s because standing up is less efficient (so you work harder for the same output), but I don’t think that’s the only reason. Heart rate reflects more than just your effort during an exercise; it can also change with body position (standing versus sitting) and other factors, like how warmed-up you are, the temperature of the room you’re in, and more. Which is why cyclists generally prefer power zones to heart rate zones—power is a more direct measurement of what you’re doing on the bike. View the full article
  22. We keep reading that high-quality content is important, but what actually is it? Research suggests the answer is not so clear cut. The post Does AI Actually Reward Quality Content? appeared first on Search Engine Journal. View the full article
  23. This month, Anthropic announced that it had built an AI model so powerful it couldn’t be released to the public. Claude Mythos had autonomously discovered thousands of critical security vulnerabilities across all major operating systems and web browsers. Anthropic chose to make the model available only to a consortium of technology companies, giving them an opportunity to patch vulnerabilities and strengthen defenses before models with similar capabilities inevitably fall into the hands of those who would exploit them.

This development shines a light on the potential future dangers that the rapid evolution of AI models brings with it. These kinds of powerful models will proliferate, and their spread will create an escalating need for governance policies rooted in the principles of responsible AI. The practice of responsible AI aims to ensure that as AI systems grow more powerful, they remain fair, explainable, and subject to human oversight—governed by ethical principles and accountable structures that protect the people those systems affect.

Responsible AI is not something businesses can set aside for the moment and hope to implement in the future. Every AI system deployed without an adequate governance framework creates reputational, legal, and operational risk right now. Those risks will only compound over time. And the dangers are not only technical. A recent survey of 750 CFOs projects roughly 500,000 AI-related job losses in 2026 alone. Responsible AI must account for the societal impact of these systems, not just the operational risks they pose to the organizations that deploy them.

Three pillars of responsible AI

Ethical foundations. An AI use policy—a list of what people can and cannot do with AI tools—feels concrete and actionable. But a use policy sits downstream from the values that it formalizes.
Before developing specific policies, the first thing you will need is clarity about what your organization stands for: the principles that will both guide policies and shape immediate decisions when technological advances blow past current guidelines.

Accountability and oversight. Responsible AI fails when nobody owns it. You need clear answers to key governance questions: Who can approve an AI deployment? Who can halt one? And who is accountable to the board when something goes wrong? Organizational accountability is a vital starting point, but it is not enough on its own. You’ll also need frontline safeguards that keep humans meaningfully in the decision-making loop, especially when it comes to matters of safety and enduring consequences.

Human impact. Every AI deployment affects real people—people whose work changes, who lose their jobs, whose options are shaped by algorithmic decisions, and whose opportunities expand or contract in accordance with the scope of the new models. A responsible AI approach means being thoughtful and deliberate about the human effects of deployment, and actively designing for fairness, dignity, and human augmentation rather than replacement.

The 90-day plan that follows is built on these three pillars.

Days 1-30: Map

The temptation with any governance initiative is to start building immediately. Resist that impulse. The first 30 days of this plan focus on mapping your AI landscape. In most organizations, the AI footprint is significantly larger, more fragmented, and less governed than leadership believes.

1. Map your AI landscape. Inventory every AI system used by the organization or that touches the organization in a significant way, including through “shadow use” of unsanctioned AI systems by employees. In most cases, the number will be significantly higher than leadership initially expects. For each use case, document what the AI does, what data it uses, who it affects, and who is responsible for its governance.

2. Force the worst-case conversations. For every AI use case you identify, ask your leadership team: What’s the worst-case scenario here? This approach is based on the catastrophize step of the CARE framework for AI risk management; the worst-case scenario is deliberately named to provoke the right mindset. The disciplined practice of imagining catastrophic failure aims to surface risks that would otherwise go unnoticed.

3. Triage. In some cases, the risks you uncover won’t be able to wait for you to develop a polished governance infrastructure. If the mapping and catastrophizing processes reveal that an AI system is making consequential decisions with no oversight, no explainability, and no clear owner—escalate the problem immediately. Pause the use of the system or place it under close human review. You don’t need a complete governance framework to act on an obvious risk.

4. Diagnose your culture. None of the governance structures you are about to build will work if your organizational culture isn’t actively engaged with them. You need to answer one fundamental question: Does your organization treat responsible AI as a business priority or as a compliance box to be checked? If the answer is the latter, a comprehensive culture change initiative will be required.

5. Map your decision rights. You need clear answers to four questions:
a. Who can approve a new AI deployment?
b. Who decides when a system requires governance review?
c. Who can halt a deployment?
d. Who can reallocate resources to address a newly identified risk?
If the answers are ambiguous, your governance framework will have no teeth—decisions will default to whoever speaks the loudest or moves fastest. In this situation, responsible AI will lose every time.

Days 31-60: Build

In the second phase, the plan’s focus shifts to building the governance infrastructure that will sustain responsible AI over the long term.

1. Develop your ethical framework.
Your ethical framework is the set of foundational principles that will guide every AI decision your organization makes, including the ones the policy hasn’t anticipated yet. It should address your commitments around fairness and nondiscrimination, your position on human oversight and the circumstances under which autonomous AI decision-making is and is not acceptable, your approach to employee impact and workforce augmentation, and your stance on the broader societal effects of AI.

2. Begin building the technical architecture. Governance policies without technical infrastructure are just words. Start putting in place the monitoring and data collection processes that your ethical framework needs to become an operational reality: the ability to track what your AI systems are doing, to detect drift and bias, and to produce the evidence your governance reviews will rely on. This work will not be complete by day 60, but the foundations need to be laid.

3. Establish ownership and structure. If responsible AI is a side responsibility bolted onto someone’s existing role, it will always lose out to the parts of their job on which their success is assessed. Someone needs to own responsible AI and governance as an intrinsic part of their actual job. Your organization needs a dedicated person or team with both an enterprise-wide view and the authority to enforce the relevant policies. You’ll also need people in each business unit with the responsibility and authority necessary to turn principles into practical governance on the ground.

4. Design your assessment process. Build a structured, repeatable process for evaluating AI systems against your ethical framework. The assessment should produce a clear risk profile for each system, with defined thresholds that trigger different levels of governance review.
Not every AI system needs board-level oversight, but you need a mechanism for determining which ones do, and that mechanism needs to be consistent, documented, and enforceable.

5. Realign incentives. People do what they’re rewarded for. If every incentive in your organization points to the importance of speed and cost reduction above all else, responsible AI will be treated as a source of friction—something to route around rather than a necessary part of the work. Tie a portion of leadership evaluation to responsible AI metrics: risk incidents identified and addressed, governance reviews completed, willingness to halt or modify deployments that don’t meet standards.

6. Begin reviews on your highest-risk systems. As soon as you have your ethical framework and assessment process in workable shape, run your first reviews on the systems that your risk inventory identified as the most exposed. You get two things out of this: real findings about your most urgent risks and an early read on whether the governance infrastructure actually works under pressure.

7. Build your skill development plan. Responsible AI requires capabilities most organizations do not yet have. Your leadership needs to understand AI risk well enough to govern it. Your technical teams need bias detection and human-centered design skills. Your frontline managers need to understand how AI is changing the work their teams do. Your legal and compliance teams need to understand the rapidly evolving regulatory landscape. Design a targeted development program that addresses the most critical gaps and then build its implementation into the governance cadence.

Days 61-90: Embed

In the last 30-day stretch, the focus shifts to ensuring the system survives contact with the day-to-day pressures of running an organization.

1. Build exit plans. Every AI system in your portfolio should have a defined exit pathway, documented and owned, that shows how to safely shut it down.
These are the exit protocols of the CARE framework, and they must be put in place before you need them. The time to design a shutdown procedure is not in the middle of a crisis.

2. Establish the governance rhythm. Set up a regular meeting with an outline agenda for monitoring and responding to responsible AI issues. This creates a protected space on the calendar for reviewing the risk landscape, surfacing emerging issues, and assessing the health of your governance processes.

3. Embed governance into operations. Responsible AI cannot live as a separate process that runs alongside normal operations—it needs to be woven into them. Every new AI system above a defined risk threshold requires a governance review before deployment. Every existing system requires periodic reassessment. No exceptions. This is where responsible AI stops being a project and starts becoming part of how you operate.

4. Iterate. By day 90, you have live data—use it. Where are the bottlenecks? What’s working well and what isn’t? Is the culture shifting or is it stuck in place? The aim here is to learn from everything you’ve done so far and use these learnings to iterate the next version of your governance engine.

Conclusion

Claude Mythos is not an anomaly. It’s a preview of the kind of dangerous capabilities AI models will bring with them in the future. The question is not whether your organization will be affected by AI systems of this power. It will. Rather, the question is whether you will have the governance infrastructure in place when they arrive. Any organization can take significant steps toward putting this infrastructure in place in a single quarter. There’s no excuse for not starting today. View the full article
  24. We all call planet Earth home and benefit from having a healthy dwelling place. Earth Day, which is today (Wednesday, April 22), is a great time to reflect on our responsibility to maintain and preserve this sanctuary for future generations. Let’s take a look at the history of the holiday and some of the festivities and demonstrations taking place around the world this year.

Who created Earth Day?

While it is now a global event, Earth Day was first conceived by Senator Gaylord Nelson of Wisconsin and Representative Pete McCloskey of California, and held on college campuses in the United States in 1970. The men were inspired by the student anti-war protest movement. The book Silent Spring, published in 1962 and written by Rachel Carson, also helped change public consciousness and set the stage for environmental awareness.

Was the first Earth Day successful?

The first Earth Day was considered a protest or teach-in. Twenty million Americans took to the streets to raise awareness about the dangers of unchecked industrial development. To put that number in context, that was about 10% of the population of the United States at the time. This event’s impact helped pass important legislation, such as the National Environmental Education Act, the Occupational Safety and Health Act, and the Clean Air Act. It also aided in the creation of the Environmental Protection Agency (EPA).

What Earth Day events are happening in 2026?

Flash forward to the present, and Earth Day is less protest and more celebration, although protests are happening today too. The theme for Earth Day 2026 is “our power, our planet.” This urges participants to focus on small actions that add up to the greater good, such as reducing plastic use and planting trees. This year also emphasizes innovation, especially in renewable energy. There are many events all over the world to help you learn about how to protect and celebrate the planet.
In Kyoto, Japan, events last all month long and include markets, yoga, and sustainability workshops. In Kenya, people are teaming up to clean the Nairobi River. In Padova, Italy, scientists Filippo Giorgi and Carlo Buontempo will take part in a free panel discussion on climate change and how cities can adapt. Closer to home in Santa Barbara, California, the festivities at Alameda Park will take place over the weekend on April 25 and 26. The weekend event includes a green car show and music.

Which protests are happening?

Going back to the holiday’s activist roots, two rallies are being held in Washington, D.C. The first, held by XRDC and Third Act, took place on April 21 at 10:30 a.m. ET outside Apple Carnegie Library. The event called on leaders to stop building AI data centers because of the negative impact these have on the environment. The following day, the CCAN Action Fund will lead a group at the Wilson Building to help remind the mayor and the D.C. Council to properly fund climate and environment programs. The event starts at 8:15 a.m. ET. View the full article
  25. Google and WooCommerce announced today that the Google for WooCommerce extension now enables merchants to sell products directly through YouTube. The update connects WooCommerce stores to YouTube channels, enabling them to tap into 2.7 billion shoppers. Merchants can tag products in videos and Shorts, where they appear as shoppable cards during playback and in a dedicated shopping tab on the channel. The cards are pulled from the merchant’s existing product catalog, they stay synced automatically through Google Merchant Center, and the same data is reused across YouTube, Shopping, and ads.

Connect WooCommerce Stores To YouTube Shoppers

WooCommerce is an open source […] The post WooCommerce Stores Can Now Sell Products Via YouTube Videos appeared first on Search Engine Journal. View the full article
  26. You’ve been told to follow a familiar set of rules for years: always use high-quality creative, keep your brand polished, stay scripted, and follow platform-recommended formats. If you’ve been in ad accounts lately or browsing feeds, you may have noticed something. Attention-grabbing ads don’t always follow those rules. They’re scrappier, less polished, and sometimes even called “ugly ads.” The beauty is that they’re coming out on top.

More brands are breaking best practices on purpose to stand out. After all, best practices are an average of what worked best for everyone else in the last six months, give or take. By the time a tactic becomes a platform-recommended rule, the edge has already been sanded off. That’s why breaking best practices works — but only if you understand what’s behind them.

Why breaking best practices leads to better-performing ads

Before getting into what to change, it helps to understand why the rules exist in the first place. Platforms like Meta and TikTok have a dual incentive: They want you to spend money on advertising. They want users to stay engaged on their platforms. The best practices they promote are designed to create a frictionless experience, pushing ads to look and behave like ads.

The problem is that what feels familiar eventually becomes invisible. When you follow the rules too closely, your ads blend into the background noise users have trained themselves to ignore. High-production ads signal “this is an ad” almost instantly, triggering a skip reflex before your hook lands. When your ad looks like something a friend might send, the brain’s defenses stay down just a bit longer, and that can be the difference between a scroll and a conversion.

That’s why many of the top-performing ads today don’t look polished or on-brand in the traditional sense. They interrupt patterns instead. Think: Grainy phone footage. Notes app screenshots. Green-screened reaction or commentary videos.
These and other lo-fi formats are outperforming studio-grade creative. Source: TikTok Ads Manager

To apply this, intentionally lower your production value and experiment with formats like point-of-view (POV) shots tailored to different personas. Dig deeper: TikTok ad creative has a shorter shelf life. Here’s how to keep up

Founder-led ads: The return of the human

Many brands have guidelines designed to make the company look faceless and invincible. They may not want to show a messy, lived-in office, a founder who hasn’t been professionally coached, or anything that breaks a tight, corporate script. But others are tossing that playbook and leaning into founder-led ads that aren’t the polished executive-profile version that was more common.

There’s a catch. Rule-breaking only works if it’s authentic. If you fake it, the web will spot it in seconds, and it won’t land the way you expect. We saw this play out in a viral series of videos where McDonald’s CEO appeared in a promotional spot to introduce a new burger. As highlighted in a Dineline video, the execution felt stiff and staged. The CEO carefully lifted the burger, looked into the camera, called it a “product,” and took a small bite from the edge. People online quickly pointed out that it didn’t look like he actually liked the food, so why should consumers?

Soon after, Burger King entered the conversation, and its president appeared in one of its kitchens holding a burger with a completely different tone. No hesitation, no corporate pauses — just a big, genuine bite. The lesson is clear: One felt like a product presentation, and the other felt like a real moment. If your leadership, your founder, and your team don’t look genuinely excited about what they’re selling, your customers won’t be either.
Rule-breaking should give you the courage to be real, not just “unpolished” for the sake of it. Source: Dineline on YouTube

The comment hook hijack

You’ve likely seen — and maybe used — a video hook best practice like “show the product in the first two seconds and state the value prop clearly.” Sound familiar? Now flip it: your ad starts with a screenshot of a negative comment. Let’s say you have a skincare ad that opens with a text bubble: “This probably smells like old socks, and does it even work?” Your founder then spends the next 15-20 seconds smiling, proving it wrong in an unscripted, unpolished way, while applying the product.

Using the platform’s native comment bubble and opening with conflict breaks your brand’s positive-association rule, but you’ll gain attention by tapping into users’ natural tendency to watch a digital argument. By the time viewers realize it’s an ad, they’ve already heard your main points and may be on their way to trying the product. Effective advertising still relies on psychology, but now it requires understanding user behavior and how algorithms work. Source: TikTok Creative Center

The rebel’s safety net

Don’t delete all your polished assets just yet. Breaking the rules is strategic. When it fails, it’s often because the “80/20 rule” gets overlooked. Shifting your entire budget to shaky phone footage overnight isn’t the move. Maintain a baseline of about 80%, and use the remaining 20% to test new, unconventional ads. Standing out doesn’t mean producing bad advertising.

Give these a try in your next test campaign:

The silent test: Skip trending audio and run a fully silent ad with large, bold captions. In a noisy feed, silence can interrupt patterns.

The UI ghost: Create a static image that looks like a platform notification or a low-battery warning, if relevant. It may annoy some viewers, but it can stop the scroll.
The algorithmic trust fall: Turn off auto-optimizations in one campaign and use broad targeting if you aren’t already. Let your ugly creative do the filtering. You may find the algorithm performs better when you remove manual guardrails.

Don’t follow the rules, understand them

Best practices are a starting point, not a strategy. If you’re going to move beyond them, do it systematically. Start with the rule, understand why it exists, ask whether it still applies, and then test the opposite in a structured way. Compare polished and lo-fi, scripted and unscripted, and brand voice and personal voice. In a feed full of brands playing it safe, those who understand the rules — and how to break them intentionally — are the ones getting attention and conversions. Focus on learning faster than everyone else. Skip the guesswork. View the full article
  27. Pakistan’s fatigue-wearing strongman Asim Munir takes unorthodox approach to mediating between two arch-enemies. View the full article



