All Activity
- Past hour
-
New Yahoo Scout AI Search Delivers The Classic Search Flavor People Miss via @sejournal, @martinibuster
Yahoo Scout offers a clean and uncluttered classic search experience with the power of natural language AI. The post New Yahoo Scout AI Search Delivers The Classic Search Flavor People Miss appeared first on Search Engine Journal. View the full article
- Today
-
EVs just outsold gas cars in Europe for the first time
EVs hit a new milestone: In December, buyers in Europe registered more electric cars than gas cars for the first time. EV registrations hit 217,898 in the EU last month—up 50% year-over-year from 2024. Sales of gas cars, on the other hand, dropped nearly 20% to 216,492. The same trend played out in the larger region, including the UK and other non-EU countries like Iceland. Car buyers have more electric options in Europe than in the U.S., from tiny urban EVs like the $10,000 Fiat Topolino to Chinese cars like the BYD Dolphin. “We’re actually seeing this trend globally, although the U.S. is a different story: as the availability and quality of EVs goes up, sales have been going up as well,” says Ilaria Mazzocco, who studies electric vehicle markets at the Center for Strategic & International Studies. “There’s a story that some of the major OEMs have been pushing that there’s no demand for EVs. But when you look at the numbers…it turns out there’s a lot of latent demand.” Some automakers are doing better than others. Tesla’s market share dropped around 38% last year in Europe as buyers reacted to Elon Musk’s politics. BYD tripled its market share over the same period. EVs made up 17.4% of car sales in the EU last year, around twice the rate in the U.S. That’s still well behind Norway (not part of the EU), where a staggering 96% of all registrations were fully battery-electric in 2025. Hybrid cars are still more popular than pure electric vehicles in the EU, with 34.5% of market share. Diesel cars, which used to dominate in Europe, now only have around 9% of market share. It’s not clear exactly what will happen next as the EU may weaken its EV policy. The bloc had targeted an end to new fossil-fueled cars by 2035; in a proposal in December, it suggested cutting vehicle emissions by 90% instead, leaving more room for hybrid cars. Some of the growth also will depend on how willing European countries are to continue letting cheap Chinese EVs on the market. 
Still, steep growth in EVs is likely to continue. View the full article
-
TikTok is tracking you now. Here’s how to protect yourself
TikTok’s U.S. operations are now managed by a new American joint venture, ending a long-standing debate over whether the app would be permanently banned in the United States. The good news for TikTok users is that this deal guarantees that the app will continue to operate within America’s borders. But there’s some bad news, too. Successive U.S. administrations—both Trump’s and Biden’s—argued that TikTok posed a national security threat to America and its citizens, partly because of the data the app collected about them. While all social media apps collect data about their users, officials argued that TikTok’s data collection was a danger (while, say, Facebook’s was not) because the world’s most popular short-form video app was owned by ByteDance, a Chinese company. The irony is that TikTok will actually collect more data about its users now than it did under ByteDance ownership. The company’s new mostly American owners—Larry Ellison’s Oracle, private equity company Silver Lake, and the Emirati investment company MGX—made this clear in a recent update to TikTok’s privacy policy and its terms of service. If this new data collection unnerves you, there are some things you can do to mitigate it.
How to stop TikTok’s new U.S. owners from getting your precise location
When TikTok’s U.S. operations were still owned by ByteDance, the app did not collect the GPS phone location data of users in the United States. TikTok’s new U.S. owners have now changed that policy, stating, “if you choose to enable location services for the TikTok app within your device settings, we collect approximate or precise location information from your device.” While allowing TikTok—or any social media app—to access your location can mean you see more relevant content from events or creators in your area, there’s no reason the app should need to know your precise GPS location, which reveals where in the world you are down to a few feet.
Thankfully, you can block TikTok’s access to your GPS location data by using the settings on your phone.
On iPhone:
- Open the Settings app.
- Tap Apps.
- Tap TikTok.
- Tap Location.
- Set location access to Never.
On Android:
- Find the TikTok app on your home screen and tap and hold on its icon.
- Tap the App information menu item from the pop-up.
- Tap Permissions.
- Tap Location.
- Tap “Don’t Allow.”
How to limit new targeted advertising
When TikTok’s U.S. operations were owned by ByteDance, the company’s terms of service informed users that it analyzed their content to provide “tailored advertising” to them. This was not surprising. TikTok’s main way of generating revenue is showing ads in the app. But in the updated terms of service posted by TikTok’s U.S. owners, it now appears that TikTok will use the data it collects about you, as well as the data its third-party partners have on you, to target you with relevant ads both on and off the platform. As the new terms of service state, “You agree that we can customize ads and other sponsored content from creators, advertisers, and partners, that you see on and off the Platform based on, among other points, information we receive from third parties.” Unfortunately, as of this writing, TikTok’s new U.S. owners don’t seem to offer a way for U.S. users to disable personalized ads (users in some regions may see the option under Settings and privacy > Ads in the TikTok app). Still, if you have an iPhone, you can at least stop TikTok from tracking your activity across apps and websites using iOS’s App Tracking Transparency feature, which allows users to quickly block an app from tracking what they do on their iPhone outside of the app.
- Open the Settings app on your iPhone.
- Tap Privacy and Security.
- Tap Tracking.
- In the list of apps that appears, make sure the toggle next to TikTok is set to off (white).
Currently, Android does not offer a feature like Apple’s App Tracking Transparency.
TikTok’s U.S. owners track your AI interactions
Like most social media apps, TikTok has been slowly adding more AI features. (One, called AI Self, lets users upload a picture of themselves and have TikTok turn it into an AI avatar.) As Wired previously noted, TikTok’s new U.S. owners have now inserted a new section in the privacy policy informing users that it may collect and store any data surrounding your “AI interactions, including prompts, questions, files, and other types of information that you submit to our AI-powered interfaces, as well as the responses they generate.” That means anything you upload to use in TikTok’s AI features—or prompts you write—could be retained by the company. Unfortunately, there’s no internal TikTok app setting, nor any iPhone or Android setting, that lets you get around this TikTok AI data collection. That means TikTok’s U.S. users only have one choice if they don’t want the app’s new U.S. owners to collect AI data about them: Don’t use TikTok’s AI features. View the full article
-
Why Yann LeCun left Meta, and what it means for AI’s next frontier
When one of the founders of modern AI walks away from one of the world’s most powerful tech companies to start something new, the industry should pay attention. Yann LeCun’s departure from Meta after more than a decade shaping its AI research is not just another leadership change. It highlights a deep intellectual rift about the future of artificial intelligence: whether we should continue scaling large language models (LLMs) or pursue systems that understand the world, not merely echo it.
Who Yann LeCun is, and why it matters
LeCun is a French American computer scientist widely acknowledged as one of the “Godfathers of AI.” Alongside Geoffrey Hinton and Yoshua Bengio, he received the Association for Computing Machinery’s 2018 A.M. Turing Award for foundational work in deep learning. He joined Meta (then Facebook) in 2013 to build its AI research organization, eventually known as FAIR (Facebook AI Research, later renamed Fundamental AI Research), a lab that produced foundational tools such as PyTorch and contributed to early versions of Llama. Over the years, LeCun became a global figure in AI research, frequently arguing that current generative models, powerful as they are, do not constitute true intelligence.
What led him to leave Meta
LeCun’s decision to depart, confirmed in late 2025, was shaped by both strategic and philosophical differences with Meta’s evolving AI focus. In 2025, Meta reorganized its AI efforts under Meta Superintelligence Labs, a division emphasizing rapid product development and aggressive scaling of generative systems. This reorganization consolidated research, product, infrastructure, and LLM initiatives under leadership distinct from LeCun’s traditional domain. Within this new structure, LeCun reported not to a pure research leader, but to a product and commercialization-oriented chain of command, a sign of shifting priorities.
But more important than that, there’s a deep philosophical divergence: LeCun has been increasingly vocal that LLMs, the backbone of generative AI, including Meta’s Llama models, are limited. They predict text patterns, but they do not reason or understand the physical world in a meaningful way. Contemporary LLMs excel at surface-level mimicry, but lack robust causal reasoning, planning, and grounding in sensory experience. As he has said and written, LeCun believes LLMs “are useful, but they are not a path to human-level intelligence.” This tension was compounded by strategic reorganizations inside Meta, including workforce changes, budget reallocations, and a cultural shift toward short-term product cycles at the expense of long-term exploratory research.
The big idea behind his new company
LeCun’s new venture is centered on alternative AI architectures that prioritize grounded understanding over language mimicry. While details remain scarce, some elements have emerged:
- The company will develop AI systems capable of real-world perception and reasoning, not merely text prediction.
- It will focus on world models, AI that understands environments through vision, causal interaction, and simulation rather than only statistical patterns in text.
- LeCun has suggested the goal is “systems that understand the physical world, have persistent memory, can reason, and can plan complex actions.”
In LeCun’s own framing, this is not a minor variation on today’s AI: It’s a fundamentally different learning paradigm that could unlock genuine machine reasoning. Although LeCun and other insiders have not released official fundraising figures, multiple reports indicate that he is in early talks with investors and that the venture is attracting attention precisely because of his reputation and vision.
Why this matters for the future of AI
LeCun’s break with Meta points to a larger debate unfolding across the AI industry.
- LLMs versus world models: LLMs have dominated public attention and corporate strategy because they are powerful, commercially viable, and increasingly useful. But there is growing recognition, echoed by researchers like LeCun, that understanding, planning, and physical reasoning will require architectures that go beyond text.
- Commercial urgency versus foundational science: Big Tech companies are understandably focused on shipping products and capturing market share. But foundational research, the kind that may not pay off for years, requires a different timeline and incentive structure. LeCun’s exit underscores how those timelines can diverge.
- A new wave of AI innovation: If LeCun’s new company succeeds in advancing world models at scale, it could reshape the AI landscape. We may see AI systems that not only generate text but also predict outcomes, make decisions in complex environments, and reason about cause and effect. This would have profound implications across industries, from robotics and autonomous systems to scientific research, climate modeling, and strategic decision-making.
What it means for Meta and the industry
Meta’s AI strategy increasingly looks short-term, shallow, and opportunistic, shaped less by a coherent research vision than by Mark Zuckerberg’s highly personalistic leadership style. Just as the metaverse pivot burned tens of billions of dollars chasing a narrative before the technology or market was ready, Meta’s current AI push prioritizes speed, positioning, and headlines over deep, patient inquiry. In contrast, organizations like OpenAI, Google DeepMind, and Anthropic, whatever their flaws, remain anchored in long-horizon research agendas that treat foundational understanding as a prerequisite for durable advantage. Meta’s approach reflects a familiar pattern: abrupt strategic swings driven by executive conviction rather than epistemic rigor, where ambition substitutes for insight and scale is mistaken for progress.
Yann LeCun’s departure is less an anomaly than a predictable consequence of that model. But it is also a reminder that the AI field is not monolithic. Different visions of intelligence, whether generative language, embodied reasoning, or something in between, are competing for dominance. Corporations chasing short-term gains will always have a place in the ecosystem. But visionary research, the kind that might enable true understanding, may increasingly find its home in independent ventures, academic partnerships, and hybrid collaborations.
A turning point in AI
LeCun’s decision to leave Meta and pursue his own vision is more than a career move. It is a signal that the current generative AI paradigm, brilliant though it is, will not be the final word in artificial intelligence. For leaders in business and technology, the question is no longer whether AI will transform industries but how it will evolve next. LeCun’s new line of research is not unique: Other companies are pursuing the same idea. And this idea might not just shape the future of AI research—it could define it. View the full article
-
Keir Starmer insists he will take ‘pragmatic’ approach during Beijing trip
Four-day visit to China overshadowed by concerns about human rights and spy threats View the full article
-
How K-12 schools are left on their own to develop AI policies
Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. With its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning. But as schools seek to navigate into the age of generative AI, there’s a challenge: Schools are operating in a policy vacuum. While a number of states offer guidance on AI, only a couple of states require local schools to form specific policies, even as teachers, students, and school leaders continue to use generative AI in countless new ways. As a policymaker noted in a survey, “You have policy and what’s actually happening in the classrooms—those are two very different things.” As part of my lab’s research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the association’s members reflects how education policy is typically formed through dynamic interactions across national, state, and local levels, rather than being dictated by a single source. But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology’s spread, including student safety, data privacy, and negative impacts on student learning. They also expressed concerns over industry influence and that schools will later be charged by technology providers for large language model-based tools that are currently free. Others report that administrators in their state are very concerned about deepfakes: “What happens when a student deepfakes my voice and sends it out to cancel school or report a bomb threat?” At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority. 
Local actions dominate
Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are providing guidance or tool kits, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans. When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. “We are a ‘local control’ state, so some school districts have banned [generative AI],” wrote one respondent. “Our [state] department of education has an AI tool kit, but policies are all local,” wrote another. One shared that their state has a “basic requirement that districts adopt a local policy about AI.” Like other education policies, generative AI adoption occurs within the existing state education governance structures, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role. Yet there is generally a lack of evidence related to how AI will affect learners and teachers, which will take years to become clearer. That lag adds to the challenges in formulating policies.
States as a lighthouse
However, state policy can provide vital guidance by prioritizing ethics, equity, and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and to construct guardrails. As it stands, AI usage and policy development are uneven, depending on how well resourced a school is.
Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools were about half as likely to report that AI guidance was provided. The poorest schools are also less likely to use AI tools. When asked about foundational generative AI policies in education, policymakers focused on privacy, safety, and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators. And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, “What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids’ first teachers.”
Introducing new technology
According to a Feb. 24, 2025, Gallup poll, 60% of teachers report using some AI for their work in a range of ways. Our survey also found there is “shadow use of AI,” as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval. Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing, as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority. In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students’ writing. “Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids.
This is why I became a teacher: to spend time with the kids.” This teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools. One initiative from the Netherlands offers a different approach than finding ways to implement products developed by technology companies. Instead, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.
Core principles
One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI, when and how to leverage these tools effectively, critically evaluate its output, and ethically disclose its use. Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context—or what my colleagues and I called “dilemma analysis” in a recent report—is an approach schools, districts, and states can take to navigate the myriad of ethical and societal impacts of generative AI. Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district, and state to engage their communities and families to co-create a path forward. As one policymaker put it: “Knowing the horse has already left the barn [and that AI use] is already prevalent among students and faculty . . . [on] AI-human collaboration versus an outright ban, where on the spectrum do you want to be?” Janice Mak is an assistant director and clinical assistant professor at Arizona State University. This article is republished from The Conversation under a Creative Commons license. Read the original article. View the full article
-
The rise of weather influencers
“Snow Will Fall Too Fast for Plows,” “ICE STORM APOCALYPSE,” and “Another Big Storm May Be Coming …” were all headlines posted on YouTube this past weekend as the biggest snowstorm in years hit New York City. These videos, each with tens or hundreds of thousands of views, are part of an increasingly popular genre of “weather influencers,” as Americans increasingly turn to social media for news and weather updates. People pay more attention to influencers on YouTube, Instagram, and TikTok than to journalists or mainstream media, a study by the Reuters Institute and the University of Oxford found in 2024. In the U.S., social media is how 20% of adults get their news or weather updates, according to the Pew Research Center. It’s no surprise, then, that a number of online weather accounts have cropped up to cover the increasing number of extreme weather events in the U.S. While some of these influencers have no science background, many of the most popular ones are accredited meteorologists. One of the most viewed digital meteorologists—or weather influencers—is Ryan Hall, who calls himself “The Internet’s Weather Man” on his social media platforms. His YouTube channel, Ryan Hall, Y’all, has more than 3 million subscribers. Max Velocity is another. He’s a degreed meteorologist, according to his YouTube bio, who has 1.66 million followers. Reed Timmer, an “extreme meteorologist and storm chaser,” also posts to 1.46 million subscribers on YouTube. “While most prefer to avoid the bad news that comes with bad weather, I charge towards it,” Timmer writes in the description section on his channel. The rising popularity of weather influencers stems not just from mistrust in mainstream media, where trust is lingering at an all-time low, but also from an appetite for real-time updates delivered in an engaging way to the social-first generation.
YouTube accounts like Hall’s will often livestream during extreme weather events, with his comments section hosting a flurry of activity. There’s even merch. Of course, influencers are not required to uphold the same reporting standards as network weathercasters. There’s also the incentive, in terms of likes and engagement, to sensationalize events with clickbait titles and exaggerated claims, or sometimes even misinformation, as witnessed during the L.A. wildfires last year. Still, as meteorologists navigate the new media landscape, the American Meteorological Society now offers a certification program in digital meteorology for those “meteorologists who meet established criteria for scientific competence and effective communication skills in their weather presentations on all forms of digital media.” While we wait to see whether another winter storm will hit the Northeast this weekend, rest assured, the weather influencers will be tracking the latest updates. View the full article
-
ASML shares hit record high as AI demand fuels orders
Dutch chipmaking equipment group predicts ‘significant increase’ in sales this year View the full article
-
Gold climbs to new record after slide in dollar
Metal has gained more than 20 per cent this year amid geopolitical volatility and questions over US currency View the full article
-
Starmer blasts ‘toxic’ politics of Reform UK’s by-election candidate
Prime minister launches attack on Matt Goodwin ahead of high-stakes poll next month View the full article
-
Why agentic AI belongs on every CEO’s 2026 roadmap
You know the ancient proverb: Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime. For leaders, first-generation AI tools are like giving employees fish. Agentic AI, on the other hand, teaches them how to fish—truly empowering, and that empowerment lifts the entire organization. According to recent findings from McKinsey, nearly eight in ten companies report using gen AI, yet about the same number report no bottom-line impact. Agentic AI can help organizations achieve meaningful results. AI agents are highly capable assistants with the ability to execute tasks independently. Equipped with artificial intelligence that simulates human reasoning, they can recognize problems, remember past interactions, and proactively take steps to get things done—whether that means knocking out tedious manual tasks or helping to generate innovative solutions. For CEOs juggling numerous responsibilities, agentic AI can be a powerful ally in simplifying decision-making and scaling impact. That’s why I believe it belongs on every CEO’s roadmap for 2026. As CEO of a SaaS company grounded in automation, I’ve made it a priority to incorporate agentic AI into our everyday workflows. Here are three ways you can put it to work in your organization.
1. Take the effort out of scheduling
Starting with one of the most basic functions of any organization—and one that can easily become a time and energy vacuum—scheduling is perfect fodder for AI agents. And they go well beyond your typical AI-powered scheduling tool. For starters, they’re adaptable. AI agents can monitor incoming data and requests, proactively adjust schedules, and notify the relevant parties when issues arise. Let’s say your team has a standing brainstorming session every Wednesday and a new client reaches out to request an intro meeting at the same time. Your agent can automatically respond with alternative time slots.
On the other hand, if a client needs to connect on a time-sensitive issue, your agent can elevate the request to a human employee to decide whether rescheduling makes sense. You can also personalize AI agents based on your unique needs and priorities, including past interactions. If, for example, your agent learns that you religiously protect time for deep-focus work first thing in the morning, it won’t keep proposing meetings then. By delegating scheduling tasks, organizations—from the CEO to interns—free up time for higher-level priorities and more meaningful work. You can build your own agent, or get started with a ready-to-use scheduling assistant that offers agentic capabilities, like Reclaim.ai.
2. Facilitate idea generation and innovation
When we talk about AI and creativity, the conversation often stirs anxiety about artificial intelligence replacing human creativity. But agentic AI can help spark ideas for engagement, leadership development, and strategic initiatives. The goal is to cultivate the conditions in which these initiatives can thrive, not to replace the actual brainstorming or strategic thinking. For example, you can create an ideation-focused AI agent and train it on relevant organizational context—performance data, KPIs, meeting notes, employee engagement data, culture touch points, and more. Your agent can continuously gather new information and update its internal knowledge. When the time comes for a brainstorming or strategy session (which the agent can also proactively prompt), it can draw on this working organizational memory plus any other resources it can access, and tap generative AI tools like ChatGPT or Gemini to generate themes, propose topics, and help guide the discussion. Meanwhile, leaders remain focused on evaluating ideas, decision-making, and execution.
3. Error-free progress updates and year-end recaps
While generative AI can be incredibly powerful, the issue remains that it is largely reactive, not proactive.
When it comes to tracking performance, team KPIs, and organizational progress, manual check-ins are still required. As I’ve written before, manual tasks are subject to human error. Calendar alerts go unnoticed. Things slip through the cracks. Minor problems become big issues. One solution is to design an AI agent that can autonomously monitor your organization’s performance. Continuous, real-time oversight helps ensure processes run smoothly and that issues are flagged as soon as they arise. For example, if your company sells workout gear and sees a post–New Year surge in fitness resolutions—and demand for a specific product—an agent can track sales patterns and alert the team to inventory shortages. An AI agent can also independently generate reports, including year-end recaps that are critical for continued growth. Rather than waiting to be prompted by a human, they can do the work alone and elevate only the issues that require human judgment. Agents have the potential to create real value for organizations. Importantly, leaders have to rethink workflows so AI agents are meaningfully integrated, fully liberating employees from rote, manual tasks and freeing them to focus on more consequential, inspiring work like strategy and critical thinking. I’ve found this leaves employees more energized, and the benefits continue to compound. View the full article
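The monitoring pattern described above can be made concrete with a short sketch. The sketch below is purely illustrative, not an implementation of any particular product: the product names, the two-weeks-of-cover threshold, and the in-memory dictionaries are all invented for the example, and a real agent would pull these numbers from sales and inventory systems and run on a schedule.

```python
# Minimal sketch of an autonomous monitoring check, assuming hypothetical
# in-memory data; real agents would query sales/inventory APIs instead.

def flag_shortages(sales_last_week, inventory, weeks_of_cover=2):
    """Flag products whose stock covers fewer than `weeks_of_cover`
    weeks of recent demand."""
    alerts = []
    for product, units_sold in sales_last_week.items():
        stock = inventory.get(product, 0)
        if units_sold > 0 and stock < units_sold * weeks_of_cover:
            alerts.append(
                f"{product}: {stock} units left, ~{units_sold}/week demand"
            )
    return alerts

# Example: a post-New Year surge in demand for workout gear.
sales = {"yoga mat": 120, "dumbbell set": 40, "water bottle": 10}
stock = {"yoga mat": 150, "dumbbell set": 200, "water bottle": 500}
# Flags only the yoga mat (150 in stock < 2 weeks x 120/week demand).
print(flag_shortages(sales, stock))
```

Running a check like this continuously and routing only the flagged items to a person mirrors the escalation pattern the article describes: the agent does the watching, and humans supply the judgment.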
-
SpaceX weighs June IPO timed to planetary alignment and Musk’s birthday
Celestial calendar meets high finance, as billionaire’s personal impulses shape plans to raise $50bn in record listing View the full article
-
ECB would need to act if euro keeps gaining, says Austria’s central bank governor
Martin Kocher untroubled by single currency’s current level but says further appreciation could drive down import prices View the full article
-
Swiss franc surges to decade high as traders seek last ‘reliable’ haven
Alpine currency at strongest level since 2015 shock appreciation, putting pressure on central bank View the full article
-
The reality of a world after rupture
Europe has a key role to play in building a successor to the US-led global order View the full article
-
Google scuppers service comparing YouTube viewing with TV and streaming audiences
‘Cease and desist’ letter forces Barb and Kantar to halt measurement service just months after its launch View the full article
-
Tether scores $5bn windfall as gold price rockets
Stablecoin company owns at least 116 tonnes of bullion, making it one of the biggest winners from blistering rally View the full article
-
Investors bet on ‘hot’ US economy heading into midterm elections
Stocks rise with inflation expectations as fund managers anticipate more stimulus despite strong growth View the full article
-
How private equity’s pioneer in tapping retail money lost its edge
From Switzerland, Partners Group built a $185bn business by serving individual investors. Bigger US rivals have the market in their sights View the full article
-
The rise of the ‘National Health State’
As patients and residents struggle to access deteriorating public services in England, NHS trusts are stepping in View the full article
-
7 Effective User Satisfaction Survey Examples to Enhance Feedback
User satisfaction surveys are vital tools for gathering feedback. They help you understand how well your products or services meet customer needs. By employing effective survey techniques, like Likert scale and open-ended questions, you can capture both quantitative and qualitative data. This information is critical for making informed decisions. In the following sections, you’ll discover practical examples and best practices that can greatly improve your survey efforts and elevate overall satisfaction.

Key Takeaways

- Use Likert scale questions to measure satisfaction levels on a range of experiences effectively.
- Incorporate open-ended questions to allow users to express their thoughts and needs freely.
- Include close-ended questions for quick quantitative analysis of customer loyalty and engagement.
- Segment feedback by adding demographic questions to tailor improvements to specific user groups.
- Implement skip logic to create a smoother survey experience and respect respondents’ time.

Understanding User Satisfaction Surveys

User satisfaction surveys are vital tools that help businesses understand customer experiences and identify areas for improvement. These structured questionnaires gather feedback on various interactions, aiming to measure overall satisfaction. You’ll often encounter user satisfaction survey examples that utilize key metrics like the Customer Satisfaction Score (CSAT) and the Net Promoter Score (NPS). The CSAT gauges how satisfied customers are, whereas the NPS assesses their likelihood of recommending your product or service. When crafting your surveys, consider incorporating a mix of quantitative and qualitative user experience survey questions. For instance, rating scales can provide measurable data, and open-ended questions can offer deeper insights. It’s important to keep your questions clear and straightforward to encourage higher response rates.
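The two metrics just mentioned are simple arithmetic. A quick sketch with made-up responses, using the standard NPS definition (percent of 9–10 “promoters” minus percent of 0–6 “detractors” on a 0–10 scale) and the common CSAT convention of counting 4s and 5s on a 1–5 scale:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    on the standard 0-10 'likelihood to recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(ratings):
    """CSAT as commonly computed: share of 4s and 5s on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings))

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors out of 6 -> 0
print(csat([5, 4, 4, 3, 2]))     # 3 satisfied out of 5 -> 60
```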
Remember, timing matters; conducting surveys shortly after customer interactions yields the most relevant and actionable feedback, helping you improve the overall user experience effectively.

Key Components of Effective Surveys

When designing effective surveys, it’s crucial to incorporate clear and straightforward questions that align with your primary objectives. This guarantees you gather actionable feedback. A mix of formats—like multiple-choice, Likert scale, and open-ended questions—can improve engagement and provide a thorough comprehension of user sentiments. When crafting your UX research questions, aim for neutrality to avoid bias, guaranteeing that responses accurately reflect user experiences without leading them toward specific answers. Implementing skip logic likewise tailors the respondent experience, allowing participants to answer only relevant questions based on their previous responses, thereby improving completion rates. Moreover, analyzing results systematically helps identify trends in user feedback. This analysis can inform strategic improvements and elevate overall user satisfaction.

Examples of User Satisfaction Survey Questions

Crafting effective user satisfaction survey questions involves a thoughtful combination of question types to elicit valuable feedback. Start with Likert scale questions, such as, “How satisfied are you with our product on a scale of 1 to 5?” This allows you to gauge user experiences more accurately. Incorporate open-ended questions like, “What features would you like to see improved?” to gather qualitative insights, revealing specific user needs. Include close-ended questions, such as, “Would you recommend our service to a friend? Yes/No,” to facilitate quick quantitative analysis of customer loyalty. Don’t forget demographic questions, like, “What is your age group?” or “What is your profession?” as these help you segment feedback for targeted improvements.
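Skip logic, mentioned above, is just conditional routing: the answer to one question decides which question appears next. A toy sketch with invented question IDs and routing:

```python
# Each question maps possible answers to the next question ID, so a
# respondent only sees follow-ups relevant to their previous answer.
# Question IDs, wording, and routing here are invented for illustration.

SURVEY = {
    "recommend": {
        "text": "Would you recommend our service to a friend? (yes/no)",
        "next": {"yes": "what_liked", "no": "what_to_improve"},
    },
    "what_liked": {"text": "What did you like most?", "next": {}},
    "what_to_improve": {"text": "What could we improve?", "next": {}},
}

def next_question(question_id, answer):
    """Return the ID of the next question, or None when the path ends."""
    return SURVEY[question_id]["next"].get(answer)

print(next_question("recommend", "no"))  # detractors skip straight to the improvement question
```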
Finally, consider follow-up questions, such as, “Why did you rate us a 3 out of 5?” to encourage users to elaborate on their experiences, providing deeper insights into their satisfaction levels.

Best Practices for Survey Design

Effective survey design is essential for gathering meaningful feedback, as it directly influences the quality of the data collected. To guarantee your survey is effective, consider these best practices:

- Prioritize Clarity and Simplicity: Use straightforward language and avoid jargon. This helps respondents easily understand your questions, leading to more accurate feedback.
- Align with Specific Goals: Focus your questions on the objectives you want to achieve. This relevance improves the quality of the data you collect, making it more actionable.
- Incorporate Varied Question Formats: Use a mix of Likert scales and open-ended questions. This combination captures both quantitative and qualitative insights, providing a thorough view of user satisfaction.

Before distributing your survey, test your questions for clarity to identify any potential confusion. Utilizing skip logic can likewise streamline the experience, respecting participants’ time and increasing completion rates.

Analyzing Survey Results for Actionable Insights

Analyzing survey results is crucial for turning raw data into actionable insights that can drive improvements in user satisfaction. Start by systematically reviewing quantitative data, like satisfaction scores, to identify trends and patterns that inform your strategic decisions. Utilize statistical tools to interpret response distributions, allowing you to pinpoint areas of strength and weakness in customer experiences. Don’t overlook qualitative feedback from open-ended responses; categorize and summarize it to uncover common themes and specific suggestions for improvement.
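Tracking quantitative scores over time can start very simply, for instance an average rating per period. A small sketch with invented data and period labels:

```python
# Sketch of comparing survey results over time: mean rating per quarter,
# which makes trend changes easy to spot. The sample responses are invented.

from collections import defaultdict

def quarterly_average(responses):
    """responses: list of (quarter, rating) pairs -> {quarter: mean rating}."""
    buckets = defaultdict(list)
    for quarter, rating in responses:
        buckets[quarter].append(rating)
    return {q: round(sum(r) / len(r), 2) for q, r in sorted(buckets.items())}

data = [("2025-Q1", 3), ("2025-Q1", 4), ("2025-Q2", 4), ("2025-Q2", 5)]
print(quarterly_average(data))  # {'2025-Q1': 3.5, '2025-Q2': 4.5}
```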
Regularly compare survey results over time, tracking changes in customer sentiment and measuring the effectiveness of the improvements you’ve implemented. Finally, share your findings with relevant stakeholders to promote transparency. This collaboration encourages collective efforts to improve user satisfaction based on the actionable insights you’ve gathered. By focusing on both quantitative and qualitative data, you can make informed decisions that truly resonate with your users.

Timing and Delivery of User Satisfaction Surveys

Timing and delivery of user satisfaction surveys play an essential role in gathering meaningful feedback. When you send surveys immediately after customer interactions, you improve the relevance of the feedback you collect. Here are three key considerations to keep in mind:

- Specific Triggers: Utilize specific events, like post-purchase or after customer service interactions, to gather accurate insights on user experience.
- Frequency Matters: Be mindful of how often you send surveys; too many can lead to survey fatigue, whereas too few might miss vital feedback opportunities.
- Delivery Methods: Choose the right delivery method—whether via email, in-app prompts, or SMS—based on user preferences to maximize engagement.

Additionally, A/B testing different delivery times and channels can help you identify the most effective combinations, ensuring you elicit valuable user feedback while respecting their time and preferences.

Continuous Improvement Through Feedback

Gathering feedback is crucial for businesses aiming to refine their offerings and improve customer experiences. Continuous improvement through feedback means regularly collecting customer insights to pinpoint strengths and weaknesses in your products and services. By implementing feedback mechanisms, like user satisfaction surveys, you can capture real-time data on customer experiences, allowing for timely adjustments.
When you actively respond to feedback, you promote a culture of continuous improvement, showing customers their opinions matter in decision-making. Data from these surveys can highlight specific areas needing improvement, enabling you to prioritize changes that greatly affect customer satisfaction and retention. Companies that effectively utilize feedback often increase customer loyalty, as about 70% of consumers are more likely to remain with a brand that actively seeks and acts on their input. Therefore, embracing feedback not only aligns your offerings with user needs but likewise strengthens your customer relationships.

Frequently Asked Questions

What Are Good Survey Questions for Feedback?

Good survey questions for feedback should be clear and specific. You might ask respondents to rate their satisfaction on a scale from 1 to 10. It’s effective to include a mix of question types, like Likert scale questions for attitudes and open-ended questions for detailed insights. Focus on individual interactions, such as “How easy was it to navigate our website?” This approach helps guarantee you get relevant, actionable feedback from users.

Can You Give an Example of Improving Customer Satisfaction?

To improve customer satisfaction, you could implement a Customer Satisfaction Score (CSAT) survey immediately after service interactions. By analyzing feedback, you identify specific pain points and areas needing improvement. For instance, if customers express dissatisfaction with response times, you can streamline your processes. Furthermore, personalizing follow-up questions can increase engagement, providing deeper insights. Making changes based on this feedback can lead to noticeable increases in overall customer satisfaction and loyalty.

What Are the 3 C’s of Customer Satisfaction?

The three C’s of customer satisfaction are Consistency, Communication, and Care. Consistency means delivering reliable service and quality products every time, which builds trust.
Communication involves listening to customer feedback and providing timely information about products or services, strengthening relationships. Care refers to genuinely addressing customers’ needs and concerns, which greatly boosts satisfaction. Together, these elements form a solid foundation for positive customer experiences, enhancing loyalty and encouraging repeat business.

What Is the 5 Point Scale for Customer Satisfaction Survey?

The 5-point scale for customer satisfaction surveys allows you to rate your experience from “Very Dissatisfied” to “Very Satisfied.” Each point corresponds to a numerical value, with 1 being the lowest and 5 the highest. This scale includes a neutral option, helping capture ambivalence. It’s user-friendly, which often increases response rates and completion.

Conclusion

In summary, effective user satisfaction surveys are essential for gathering valuable feedback. By combining various question types, such as Likert scale and open-ended questions, you can gain a thorough comprehension of user experiences. Implementing best practices in survey design and timing improves response rates and quality. Analyzing the results allows you to derive actionable insights that drive continuous improvement. In the end, leveraging this feedback can greatly improve customer satisfaction and strengthen your overall service or product offering.

Image via Google Gemini

This article, "7 Effective User Satisfaction Survey Examples to Enhance Feedback" was first published on Small Business Trends View the full article
-
How Trump was forced to back off his harshest immigration tactics
The US president has hemmed in hardliners and softened his rhetoric after the shooting of Alex Pretti View the full article
-
UK’s deadline for paying IHT on pensions poses ‘huge problem’, peers warn
Government should extend time to pay from six to 12 months, says Lords committee View the full article
- Yesterday
-
Why homeowners insurance rates could stabilize in 2026
Rates actually declined or remained flat over a two-year period in 15 states, including Florida, with natural disasters and tariffs affecting 2026's movements. View the full article