All Activity
- Past hour
-
Oil surges above $90 a barrel for first time in Iran war
Traders are bracing for a longer conflict and further production shutdowns.
-
Why Luckin Coffee, Starbucks’ biggest competitor, wants to buy Blue Bottle
The Chinese coffee giant Luckin is reportedly acquiring the third wave coffee mecca Blue Bottle in a deal worth just shy of $400 million. It’s more than another acquisition: Luckin is making its most aggressive move yet on Starbucks since opening its first U.S. locations in New York in 2025, in a rivalry that is quickly heating up. But to understand what’s at play, we need to zoom out for a moment to take a quick scan of the global coffee market.

Inside the coffee wars

With around 40,000 stores and $37 billion in revenue, Starbucks is the biggest coffee company in the world. While it’s had a few stagnant years, its all-star CEO Brian Niccol has been staging a design-led turnaround, in which cozier cafes and a protein-laden menu have siren-called customers back with some early success. Luckin, a company controlled by the Chinese private equity firm Centurium Capital, is its only sizable challenger—one that grew its global footprint by a hyper-aggressive 39% in 2025 to reach around 31,000 stores. Luckin is in some ways the antithesis of Niccol’s Starbucks. Its stores have smaller footprints and emphasize digital ordering. It will also gladly operate at a loss to unlock new markets—all while Starbucks has been closing its underperforming stores. (Luckin has reportedly seized this moment to buy some old Starbucks locations—undoubtedly hoping to swap someone’s daily Starbucks run for its own brand.) Neither of these companies is operating in a vacuum, though. A slew of smaller challengers is eating into the coffee market. You’ll find 12,000 Dunkin’s globally, along with other chains including Tim Hortons, Dutch Bros, Scooter’s, and Blank Street, none of which breaks into five figures. Each of these brands has carved out its own appeal with consumers, whether pumping out relatively inexpensive giant iced coffees, offering simple drinks with minimal decor, or serving up desserts disguised as coffee straight out of a drive-thru window.
But none of them serves really good coffee, if we’re being honest. They all lack the third wave coffee vibe, where single origin pour-overs still rule and spending over $10 for a cup is far from rare. On one hand, perhaps the third wave coffee market matters less than we think these days. Blue Bottle’s 140 stores globally aren’t profitable. Starbucks closed its own high-end “Reserve” stores in 2025, admitting a failed strategy to woo people to (even) fancier coffee. We live in the age of iced coffee and matcha anyway (60% of Starbucks drinks are sold on ice these days). On the other hand? One report suggests that Centurium Capital is already talking to malls in China, scoping closed Starbucks Reserve stores that might fit a Blue Bottle. In other words, Luckin sees an opportunity to own the next tier of coffee snobbery by leveraging Blue Bottle as a bona fide and distinct premium sub-brand. Luckin can stay Luckin—the best in convenient coffee—while Blue Bottle becomes its reserve identity.

Is that it for the story?

So does this mean Luckin played the game better than Starbucks? Not so fast. There’s a strange third-party twist to the story, in which the real winner may be Nestlé—by some measures the true number-two coffee company in the world. Coffee is one of the top categories for its $115 billion business—representing $32 billion in sales last year—just $5 billion shy of Starbucks. Nestlé bought its majority stake in Blue Bottle for $425 million back in 2017 (eventually buying out the full company for an estimated $700 million), back when third wave coffee shops were consolidating and big-budget cold brew was hitting grocery store shelves. It left Blue Bottle stores running with relative independence, while making Blue Bottle a shining star of its rich at-home portfolio. Nestlé owns—wait for it—the biggest instant coffee brand in the world in Nescafé (instant coffee was a $42 billion industry in 2023, by the way, and is growing).
It also owns Seattle’s Best, Coffee Mate (those creamers), and the rights to Starbucks dry prepackaged coffee, pods, and instant offerings. (PepsiCo handles the premade Starbucks drinks you buy from the store in a 50/50 split with Starbucks.) As part of Blue Bottle’s sale to Centurium Capital, it appears Nestlé retained the entire grocery store side of Blue Bottle. So it seemingly took a $300 million loss and offloaded the management of unprofitable high-end coffee shops while retaining their cachet on the shelf. Nestlé doesn’t report revenue on Blue Bottle store products, so we have no idea how long that $300 million will take to recoup, but we do know its Starbucks line was pulling in about $2 billion in revenue a year way back in 2018. While Blue Bottle would be vastly smaller, Nestlé stands to recoup its loss and even see gains in the long term if Centurium Capital makes Blue Bottle cafes a bigger deal. But in the short term, did anyone win from the Blue Bottle acquisition? We might not know for a while. It all depends on where Luckin takes the brand, how Starbucks responds, and whether all those millennials who made third wave coffee a thing will even notice. We have reached out to Blue Bottle and Luckin to verify reports of the sale and will update the story with any details as they come.
- Today
-
The time change in the U.S. this weekend is a problem, and there’s no consensus on how to fix it
Clocks will skip ahead an hour at 2 a.m. Sunday for daylight saving time in most of the U.S., creating a 23-hour day that throws off sleep schedules, plunges early-morning dog walks into darkness and inspires millions of complaints. Even though polls show most people dislike the system that has most Americans changing clocks twice a year, the political moves necessary to change the system haven’t succeeded because opinions on the issue and its potential impacts are sharply divided. Want to make daylight saving time permanent? That would mean the sun rises around 9 a.m. in Detroit for a while during the winter. Prefer staying on standard time year round? That would mean the sun would be up at 4:11 a.m. in Seattle in June. “There’s no law we can pass to move the sun to our will,” said Jay Pea, the president of Save Standard Time, an organization devoted to switching to standard time for good. Here’s a look at the debate.

Imposing a clock on a rotating planet causes a lot of headaches

Genie Lauren spends her winters in New York City keeping an eye on the sunrise and sunset, “white-knuckling it” until the sun is up late enough for her to feel like doing anything outside her apartment after work — even going to the movies. “The majority of the year we’re in daylight savings time,” said the 41-year-old health care worker. “What are we doing this for?” The U.S. has tinkered with the clock intermittently since railroads standardized the time zones in 1883. So has a lot of the world. About 140 countries have had daylight saving time at some point; about half that many do now. About 1 in 10 U.S. adults favor the current system of changing the clocks, according to an AP-NORC poll conducted last year. About half oppose that system, and some 4 in 10 didn’t have an opinion. If they had to choose, most Americans say they would prefer to make daylight saving time permanent, rather than standard time.
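The skipped hour and the 23-hour day the article describes can be seen directly with a quick sketch using Python’s standard zoneinfo module (the date here assumes the 2025 U.S. transition, Sunday, March 9, as an example):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # standard library since Python 3.9

eastern = ZoneInfo("America/New_York")
utc = ZoneInfo("UTC")

# One real minute before the 2025 spring-forward transition (2 a.m. EST = 7:00 UTC).
t0 = datetime(2025, 3, 9, 6, 59, tzinfo=utc)
t1 = t0 + timedelta(minutes=1)

print(t0.astimezone(eastern))  # 2025-03-09 01:59:00-05:00  (EST)
print(t1.astimezone(eastern))  # 2025-03-09 03:00:00-04:00  (EDT) -- the 2 a.m. hour never happens

# The calendar day itself is only 23 real hours long.
day_start = datetime(2025, 3, 9, 0, 0, tzinfo=eastern)
day_end = datetime(2025, 3, 10, 0, 0, tzinfo=eastern)
print(day_end.astimezone(utc) - day_start.astimezone(utc))  # 23:00:00
```

Converting through UTC is deliberate: subtracting two aware datetimes that share the same tzinfo compares wall-clock times and would report 24 hours.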
A dilemma for policymakers

Since 2018, 19 states — including much of the South and a bloc of states in the northwestern U.S. — have adopted laws calling for a move to permanent daylight saving time. There’s a catch: Congress would need to pass a law to allow states to go to full-time daylight saving time, something that was in place nationwide during World War II and for an unpopular, brief stint in 1974. The U.S. Senate passed a bill in 2022 to move to permanent daylight saving time. A similar House bill hasn’t been brought to a vote. U.S. Rep. Mike Rogers, a Republican from Alabama who introduces such a bill every term, said the airline industry, which doesn’t want the scheduling complexity a change would bring, has been a factor in persuading lawmakers not to take it up. U.S. Rep. Greg Steube, a Florida Republican, is proposing another approach. “Why not just split the baby?” he asked. “Move it 30 minutes so it would be halfway between the two.” Steube thinks his bill could get bipartisan support. The change would put the U.S. out of sync with most of the world — though India has taken a similar approach, and in Nepal, the time is 15 minutes ahead of India.

Sleep experts prefer more daylight in the morning

Karin Johnson, the vice president of the advocacy group Save Standard Time and a professor of neurology at the University of Massachusetts Chan Medical School, said permanent standard time — with the sun straight overhead close to noon — would help students, drivers and practically everyone else function better year-round. “Morning light is what’s really critical for setting our circadian rhythms each day,” she said. Kenneth Wright, a professor and director of the Sleep and Chronobiology Laboratory at the University of Colorado, said the risk of fatal vehicle crashes, heart attacks and strokes increases in the days that follow turning the clock forward.
“Based on the evidence for our health and well-being and safety, the best option for us as a country now is to choose to go to permanent standard time,” he said.

Obstacles block change

Of all U.S. states, only Arizona — except the Navajo Nation — and Hawaii currently opt out of daylight saving time. In the last two years, half a dozen states have adopted bills in one legislative chamber to switch to permanent standard time, including Virginia in February. A Virginia House committee this week recommended dropping the issue until 2027. Most of those measures included caveats that the change would only take effect if neighboring states also made the move. For instance, Virginia would go to standard time only if Maryland and Washington, D.C., do, too. That could partially answer some of the concerns from groups including broadcasters who warn of schedule confusion. It wouldn’t solve the concerns of the golf industry, which opposes full-time standard time because that would make it harder for people to get in a round in the evening. Many full-time daylight time bills have similar provisions.

A call to make states decide

Scott Yates, a Colorado man who runs the website Lock the Clock, wants the federal government to pass a law ending the twice-a-year clock change in two years. Under his plan, states would have to commit to either daylight saving or standard time. As long as the clock changes persist, Yates has some advice. “If you’re the boss, tell all your employees on Monday that they can come in an hour later,” he said. “And if you aren’t the boss, tell your boss that you think you should come in an hour later on Monday. Sleep in for safety.” Associated Press writer David A. Lieb contributed. —Geoff Mulvihill, Associated Press
-
Alysa Liu’s hometown skating rink tells a surprising story about cities
It could have easily become a high-rise luxury condo complex. Or maybe a struggling office tower now being converted into luxury condos. Maybe a parking garage, or a data center. But instead, 30 years ago this spring, Alameda County Parcel Number 8-641-8-5 became home to the Oakland Ice Center—where recently crowned Olympic gold-medalist figure skater Alysa Liu still trains. Located just north of downtown Oakland, in what the city considers the Uptown Retail and Entertainment Area, parcel 8-641-8-5 was just a vacant, privately owned lot back in 1991. That year, Oakland’s now-defunct Redevelopment Agency acquired it as part of a three-parcel transaction for $1.8 million. The Bay Area was a hot spot for ice sports in the early 1990s. Mountain View’s Brian Boitano had won a figure skating gold medal at the 1988 Winter Olympics. Fremont’s Kristi Yamaguchi was on her way to figure skating gold at the 1992 Winter Olympics. After a brief flirtation with the NHL’s Minnesota North Stars moving to Oakland (the team infamously moved to Dallas in 1993), the Bay Area finally got its first NHL team in the San Jose Sharks, who dropped the puck for their inaugural season in the fall of—you guessed it—1991. Oakland City Council members came to believe an ice sports center was just what they needed to revitalize a struggling downtown. The eight other ice sports facilities in the Bay Area were over-booked with youth and adult hockey leagues as well as figure skaters of all ages training, twirling and competing. Projections held that a new ice center would bring 500,000 visitors annually to downtown Oakland, generating nearly $5 million a year in retail, food and lodging revenue. So in April 1995, Oakland’s Redevelopment Agency signed a ground lease with a private developer team to build and operate the facility, which the agency financed with $11 million in tax-exempt bonds. Those projections were way off, of course.
The private developer team went belly-up just three months after the Oakland Ice Center opened in March 1996. It would take more than a decade and three changes of private operator to stabilize the Oakland Ice Center. The parent company of the San Jose Sharks, which still manages the facility today, took over in October 2007—when Alysa Liu was just 26 months old. The City of Oakland now owns the Oakland Ice Center. But the community investment program that enabled this center’s development has been dissolved: The state of California contentiously eliminated its 400-plus local redevelopment agencies in 2012 as part of closing a $26 billion state budget deficit. While budget hawks and accountability groups praised the move, it meant eliminating specialized public entities that created redevelopment plans, funded local infrastructure improvements, assembled parcels, assisted developers, brokered deals and sold tax-exempt bonds to pay for all of the above. California’s redevelopment agencies had their flaws and missteps, but planners and community development leaders across the state say no entity has truly filled the gap they left, both as long-term stewards of publicly owned land and as sources of local public dollars dedicated to local economic and real estate development. And so the ecosystem that created Alysa Liu’s home rink—and shielded it from the pressures of the market until it could find its footing—no longer exists.

Complicated roots

At the time California’s redevelopment agencies were dissolved in 2012, they were recipients of $5.6 billion a year in property tax revenues, enough for Next City to label them “America’s Biggest Redevelopment Program.” The story of California’s redevelopment agencies begins in 1945, when state lawmakers passed the Community Redevelopment Act.
The legislation gave cities and counties the authority to establish redevelopment agencies (or RDAs) as independent, publicly affiliated entities with a mission to eliminate blight through development, reconstruction, and rehabilitation of residential, commercial, industrial, and retail districts. Those agencies were supercharged after Congress passed the Housing Act of 1949. Title I of that legislation infamously created “Slum Clearance” powers that allowed cities across the country to declare entire neighborhoods “slums” and offered federal loans and grants to bulldoze them and make way for private developers to rebuild. To access those federal loans and grants, local governments needed to come up with their own matching funds. In 1951, California passed new legislation that provided RDAs with matching dollars via the nation’s first “tax-increment financing” scheme. With tax-increment financing, also known as TIF, a city or county designates an area, or sometimes a single property, as “blighted” and in need of new investment. Upon designation, the amount of property taxes paid to the local government (as well as to the school district, parks district, transportation district or other local government bodies) is frozen within that area. Over time, if property values within the designated area rise, any property taxes assessed above the frozen amount are set aside to subsidize redevelopment projects or fund other eligible activities within the designated area. Fueled by Title I Slum Clearance and their new TIF dollars, California RDAs went right to work, using eminent domain to demolish cherished homes and neighborhoods wholesale in the name of “urban renewal.” The project that incited James Baldwin to re-dub urban renewal “Negro removal” was in fact the San Francisco RDA’s bulldozing of a huge chunk of the Fillmore District, a predominantly Black enclave in San Francisco. Oakland created its RDA in 1956.
Its first large-scale project involved bulldozing the 34-acre Acorn neighborhood, home to around 500 primarily low-income families (78% African American, 20% Mexican American, and 2% white) living in some 600 dwellings. But it wasn’t as simple as RDAs being wielded only to destroy Black neighborhoods and hand them over to white developers and contractors. In the aftermath of Acorn’s 1962 destruction, John B. Williams became the head of Oakland’s RDA in 1964—making him among the first Black people to head a city agency in the United States. A Baptist preacher born in Covington, Georgia, Williams also had a fine arts degree and helped found First Enterprise Bank, the first minority-owned bank in Northern California. According to Places Journal, with his fine arts background he supported art as a means to engage community members in the agency’s work. He was the first Oakland official to enforce minority training and hiring policies, and he required that the agency employ laborers and award contracts proportionate to city demographics. Williams led Oakland’s RDA until he died of cancer in 1976.

Complicated demise

Since proliferating across the country, TIF schemes have come to differ from state to state, and they go by many names. In Texas, it’s known as a Tax Increment Reinvestment Zone, or TIRZ. Florida calls it a Community Redevelopment Area, or CRA. Back in 2018, Chicago infamously had around 150 TIF districts, as many as the next nine largest U.S. cities combined, according to a study of TIFs by the Lincoln Institute of Land Policy. For local public officials, TIF can seem like a magical way for redevelopment to pay for itself. Cities can borrow dollars up front, based on projected future TIF area property tax payments, then use those dollars to do almost anything they want—like build the Oakland Ice Center. If all goes as planned, property tax revenues collected within the TIF area will then repay the debt automatically as time goes by.
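The pay-for-itself arithmetic behind TIF can be made concrete with a short sketch. All numbers below are hypothetical; the $11 million of debt merely echoes the bond figure cited for the Oakland Ice Center, and the base value, tax rate and growth rate are invented for illustration:

```python
# A minimal sketch of tax-increment financing (TIF) arithmetic.
# Hypothetical numbers; the $11M debt echoes the Oakland Ice Center bonds.

def tif_repayment_years(base_value, tax_rate, growth, debt):
    """Count the years until the tax increment alone repays up-front debt."""
    frozen_base_taxes = base_value * tax_rate  # still flows to city, schools, etc.
    value, remaining, years = base_value, debt, 0
    while remaining > 0:
        years += 1
        value *= 1 + growth                               # assessed values rise
        increment = value * tax_rate - frozen_base_taxes  # growth above the frozen base
        remaining -= increment                            # only the increment services the debt
    return years

# A $100M district, 1% tax rate, 5% annual growth, $11M in bonds
print(tif_repayment_years(100_000_000, 0.01, 0.05, 11_000_000))  # -> 18 years
```

The key point the sketch captures: the base taxes keep flowing to existing services, and only the growth above the frozen base is available to service the bonds, which is why repayment can stretch across decades when growth disappoints.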
TIF schemes also vary greatly in how decisions get made about which projects to finance or which properties to acquire for redevelopment. Not all TIF schemes create an RDA-like entity that can acquire properties. In Chicago, TIF districts don’t have a separate governing entity, only separate bank accounts whose dollars are ultimately doled out by the city’s Department of Planning and Development, which is really controlled by the mayor. In Texas and Florida, each TIRZ or CRA has its own board of commissioners that oversees an entity that controls its dollars, acquires properties and sets up partnerships with private developers. Back in California, each city or county established an RDA with the power to designate multiple TIF areas, acquire properties and spend TIF dollars on projects located in the designated areas the dollars came from. City and county legislators had the flexibility to control RDAs directly themselves or to create an appointed commission to wield RDA powers. Because it derives revenue from local property taxes, TIF is often seen as pulling money away from schools, fire departments, parks, libraries and other local public services usually supported by those taxes. TIF projects also don’t often require direct approval from mayors, city councils or voters, so TIF dollars often end up being used as a slush fund to support local politicians’ pet projects that happen to be developed by their biggest campaign donors. For these and other reasons, TIF remains a hot-button issue in places like Chicago and St. Louis. Ultimately, it was the TIF funding mechanism that led to the demise of California’s RDAs. When former Oakland Mayor Jerry Brown came into office as California governor in 2011, he inherited a $26 billion state budget deficit from the Governator. Although Brown had been a huge beneficiary of Oakland’s RDA during his time as mayor, the RDAs suddenly became sacrificial lambs to help close that giant hole.
Under the state laws governing RDAs, the state was obligated to pay local school districts for any revenues lost to tax-increment financing. The state, Brown argued, could no longer afford those payments. At the time, RDAs accounted for 12% of all property taxes paid across California; in some places, they earned more property tax revenue than the local city or county government that created them. Cities, counties and RDAs fought back vehemently. Gov. Brown first tried eliminating them by executive order. When that didn’t work, the state passed legislation that the RDAs and local governments later fought in court. The state emerged victorious, leading to the dissolution of RDAs in 2012.

More than money

Losing RDAs has meant losing more than just funding for local economic and real estate development across California. While many of the decisions they made were questionable or arguably malicious, each RDA over time built its own internal capacity for wielding land and money in ways that always had the potential to serve the public interest. And that capacity has never really been replaced. Helen Leung is the executive director of LA Más, a nonprofit fighting against real estate speculation in Northeast Los Angeles, where she was born and raised. She previously worked as a planning and land use staffer for former L.A. city council member Eric Garcetti, who held that office from 2001 to 2013 before becoming mayor. “It was fascinating to see how much money and land the redevelopment agency had access to, how much power it had to put together giant economic development projects,” Leung tells Next City. “Projects took a long time but they were also catalytic and had community benefits or contributions that weren’t possible outside the redevelopment agency area or without redevelopment agency investment.” Things have changed for planners and local officials attempting to revitalize their cities.
“All the things we do now to require things like prevailing wages on projects or inclusionary housing was just done deal-by-deal by the redevelopment agency,” she says. “I can appreciate that power as someone with a planning background and who used to work for local government — but I can also understand the fear or skepticism of big agencies with a lot of power and the ability to move fast.” While they had the power to move fast, as public entities RDAs also had the ability to be patient when warranted. After the Oakland Ice Center’s original developers went belly-up, Oakland’s Redevelopment Agency was able to step in quickly and take ownership of the facility, keeping it open as it searched for a new private partner to operate it. The second manager it picked ended up having political ties as a campaign contributor. They were gone after three years. The third manager it picked only signed a two-year lease, but stayed on month-to-month for five more years as the facility continued to lose money. It wasn’t till 2007 that Oakland’s Redevelopment Agency finally found a partner—the corporate parent of the San Jose Sharks—who was able to work out a sustainable business model for the facility. Models for this sort of dedicated, long-term stewardship of real estate by public or quasi-public independent entities have shown long-term success in other places, most notably Seattle. In 1973, the Seattle City Council created the Pike Place Preservation and Development Authority to steward the landmark eponymous public market, which the city previously tried to convert into a parking garage. Created in 1974, Historic Seattle stewards a citywide portfolio of historic cultural venues. Created in 1975, the Seattle Chinatown-International District Preservation and Development Authority stewards a growing portfolio of properties in its eponymous neighborhood. 
Around 20 such entities operate in and around Seattle, including the Social Housing Public Development Authority, created in 2023 to acquire and build a citywide portfolio of mixed-income housing. The new social housing development authority also shows that it isn’t necessary to fund redevelopment entities with TIF schemes: It’s funded by a 5% tax on local employee salaries of $1 million or more, which netted $115 million in its first year, far exceeding projections. The success of California’s redevelopment agencies varied greatly from city to city, sometimes TIF area by TIF area within a single redevelopment agency. There’s also more than one way to define or measure success: A neighborhood where RDA-supported projects succeed in catalyzing new private investment without RDA support may also be targeted for speculative investment that displaces the very people who were supposed to benefit from their own property tax dollars being invested locally. “Redevelopment agency projects also gentrified some communities,” Leung says. “Hollywood looks a lot different now than it did back then. Everyone you talk to about redevelopment agencies will have lots of pros and cons, whether they’re in the weeds or not in the weeds.” This story was originally published by Next City, a nonprofit news outlet covering solutions for equitable cities.
-
Break down data silos: How integrated analytics reveals marketing impact
Do you think you’re able to answer the question every marketing leader dreads hearing from leadership: “Why isn’t our marketing effort doing more?” How do you even go about answering that? Let’s look at what I mean using a fictional location analytics company we’ll call Acme Area Analytics. The Acme team reviews its reports. Nothing appears broken. Campaigns are running, leads are still coming in, and performance metrics are mostly stable. Yet sales momentum isn’t clearly accelerating, and it’s hard to pinpoint why. Insights are scattered across site analytics, brand monitoring and SEO tools, CRM systems, and paid media dashboards. Each platform reflects part of the story, but none shows the full picture. That fragmentation is exactly how well-intentioned “data-driven decisions” can go wrong. Let’s look at how that happens and how Acme, and you, can fix it.

When the data points in the wrong direction

In global, multi-channel campaigns like Acme Area Analytics’, the hardest moments are when nothing is obviously underperforming. Digital channels are running. Leads are coming in, and metrics are mostly stable, yet sales momentum is stalled and it’s unclear which lever to pull next. At the same time, subtle signals raise concerns. Non-brand CPCs are creeping upward, and a competitor — Spotter Intelligence — is suddenly appearing more frequently in branded search. Let’s say you’re part of the Acme marketing team. You go back to your reports and ask the question most marketers ask in this situation: Which tactic is underperforming? When diving into the platform data, you uncover what looks like a clear answer: Remarketing performance for your API has softened, conversion rates have dipped slightly, and efficiency has begun to decline. On the surface, you have your answer. Spend should be pulled back to match demand because audiences have likely seen the creative too many times. That decision could certainly make sense, and it’s what many teams actually end up doing.
But it’s also often wrong. Why? Because you haven’t yet asked the right question. The more useful question is harder to answer: “Is demand actually declining, or are we failing to create new interest upstream?” Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

The insight appears when you look across systems

The real issue becomes clear when you look beyond a single channel. The location analytics market still had strong growth potential, but your product was encountering a shortage of engaged audiences receptive to the message. That disconnect became clearer when you looked beyond paid media. Site engagement trends in analytics and brand search behavior in Search Console suggested interest in your type of location AI wasn’t disappearing. It just wasn’t converting yet. The focus had shifted from reach to engaged awareness, with a priority on attention and engagement, not just exposure. So your Acme team decided to introduce additional campaign layers, including new content designed to build relevance and trust. Crucially, you didn’t see any improvement right away. Cost-per-lead efficiency continued to decline, and it looked worse after increased upper-funnel investment. From a platform-only view, this looked like the time to pull back. But looking across systems changed how performance was interpreted. Engagement from awareness activity began feeding remarketing pools, but for a product with long sales cycles like your API, the impact wouldn’t surface immediately. During that gap, the Acme team maintained confidence in its strategy by sharing early signs of upstream momentum. Only later did results begin to show up: Remarketing efficiency improved, and higher sales volumes of the API were confirmed in integrated CRM data.
The takeaway for the Acme Area Analytics marketing team wasn’t just that “remarketing worked again,” or that upper-funnel activity drives demand. It’s that the hardest marketing decisions are the ones you have to make — and hold — before success shows up in the metrics leadership typically trusts.

Why the insight only appeared between dashboards

In our Acme example, each dashboard told a technically accurate story, but no single dashboard could fully articulate the whole picture. Paid media dashboards reflected efficiency trends. Analytics and Search Console showed shifts in engagement and demand. CRM data lagged behind decisions by weeks or months. Looking at any of those in a silo wouldn’t have allowed Acme’s marketing team to fully understand what was happening; the insight didn’t live in any single view. When the question the team asked itself shifted to whether demand was moving effectively through the funnel, and the dashboards were evaluated together in context, the decision changed. This is what unsiloed analytics looks like in practice. It’s not about teams fighting over which touch led to the result, but about recognizing that each part of a marketing plan plays a distinct and important role in creating momentum that grows demand and lifts sales. Leadership wants proof. Pipeline and revenue might feel like the safest validation. But in complex, multi-channel programs, those are often lagging indicators of solid performance. By the time pipeline clearly reflects demand creation, teams have often already pulled back awareness investment, cut channels that looked inefficient in isolation, and shifted budget toward short-term demand capture. In the example above, waiting for proof would have meant that Acme reduced awareness and remarketing spend and possibly exited a market that would later show great promise.
Integrated data didn’t eliminate the risk of shifting investment from lead generation to awareness-building in a market that had declining metrics. Instead, it added credibility to the case for doing so.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era

The same pattern at a smaller scale

This dynamic isn’t limited to complex, multi-channel programs. You can see it even within a single platform when multiple tactics work together. Let’s look at a scenario where Acme’s brand search impression volume increased by roughly 50% year over year while Share of Voice remained flat. That means more people have been searching for Acme as the company has invested across out-of-home and other digital campaigns. Acme’s Google campaign then harvested the demand created by other channels. If Acme’s brand search had been evaluated only in terms of its media plan efficiency, this signal of growing demand would have been easy to miss. In context, it confirmed that Acme’s awareness efforts were working, even though attribution couldn’t perfectly assign credit to individual channels.

What changes when data is integrated

In these examples, integrated data — unsiloed data — shifted the conversation. Instead of Acme’s marketing teams debating budget cuts, they could monitor signs of early momentum, including longer time on site and rising brand search volume. Over time, that interest could be seen in the CRM as higher-quality leads that converted more frequently into closed deals. The good news is that this doesn’t require new tools or perfectly stitched-together data. It simply requires stepping back during planning and asking better questions about how potential customers signal interest as they consider your product.

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma
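The brand-search-versus-Share-of-Voice reasoning can be made concrete. If Share of Voice is your impressions divided by total market searches, then flat SoV with rising impressions implies the market itself grew. A small sketch with illustrative numbers (not figures from the article):

```python
def implied_market_searches(brand_impressions: float, share_of_voice: float) -> float:
    """Invert SoV = impressions / market to estimate total market searches."""
    return brand_impressions / share_of_voice

# Illustrative: impressions up roughly 50% year over year, SoV flat at 20%.
market_last_year = implied_market_searches(100_000, 0.20)  # 500,000 searches
market_this_year = implied_market_searches(150_000, 0.20)  # 750,000 searches
market_growth = market_this_year / market_last_year - 1

# Flat SoV + 50% more impressions => the whole category grew ~50%,
# i.e., awareness channels created demand that the search campaign harvested.
print(f"Implied market growth: {market_growth:.0%}")
```

The design point is that neither number alone tells the story: impressions alone could mean better bidding, and SoV alone looks like stagnation. Together they indicate category growth.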
Seeing opportunity before it’s obvious

In my experience, the most valuable marketing insights come from understanding how different data points relate. Unsiloing your data isn’t about proving causality or winning attribution debates. Instead, it’s about recognizing opportunity early enough to act on it and identifying which metrics suggest that demand is quietly being built in the background. The teams that win aren’t only better at reporting results. They’re better at seeing momentum while it’s still forming and acting on it early. View the full article
-
Police return passport to Mandelson after arrest
Metropolitan Police says investigation into former cabinet minister is ongoingView the full article
-
Economy loses 92,000 jobs in February
The Bureau of Labor Statistics reported that the economy lost 92,000 jobs in February while unemployment held steady at 4.4%, a development that could spur the Federal Reserve to question whether interest rates are truly in balance. View the full article
-
Hargreaves Lansdown postpones fee rises, but only for ‘valued’ clients
Move follows a number of wealthier customers switching to rival investment platformsView the full article
-
The U.S. job market is still under strain: report shows unemployment rose to 4.4% in February
American employers unexpectedly cut 92,000 jobs last month, a sign that the labor market remains under strain. The unemployment rate blipped up to 4.4%. The Labor Department reported Friday that hiring deteriorated from January, when companies, nonprofits and government agencies added a healthy 126,000 jobs. Economists had expected 60,000 new jobs in February. Revisions also cut 69,000 jobs from December and January payrolls.

The job market had been expected to rebound this year from a lackluster 2025, when the economy, buffeted by the President’s erratic tariff policies and the lingering effects of high interest rates, generated just 15,000 jobs a month. Construction companies cut 11,000 jobs last month, which likely reflects frigid weather. And healthcare firms shed 28,000 jobs after a four-week strike by more than 30,000 nurses and other front-line workers at Kaiser Permanente in California and Hawaii.

The outlook for the job market – and the entire economy – is clouded by the war with Iran. Employers were reluctant to hire last year because of uncertainty over the President’s tariffs – and the unpredictable way he rolled them out. High interest rates, engineered by the Federal Reserve to combat a burst of inflation following the COVID-19 pandemic, also weighed on the job market in 2025.

The impact of the President’s aggressive trade policies may recede in 2026. His import taxes became smaller and less erratic after he reached a trade truce last year with China and deals with leading U.S. trade partners such as Japan and the European Union. A lot of businesses have also learned how to offset the costs of the tariffs, often by passing them along to customers via higher prices. Businesses needed “a year to bake some of those costs into their business model, and now it’s time to get back to growth mode,” said Andy Decker, CEO of Atlanta-based Goodwin Recruiting.
The Supreme Court has also struck down the biggest and boldest of the President’s tariffs – though he is replacing them with new ones. Still, hiring continues to lag far behind the hiring boom of 2021-2023, when the economy was bouncing back from pandemic lockdowns and the United States was adding nearly 400,000 jobs a month. Many economists describe today’s job market as “no-hire, no-fire”: Companies are reluctant to add workers but don’t want to let go of the ones they have.

Luckily, achieving good-enough job growth is easier these days. Until a year or two ago, employers needed to hire well over 100,000 people a month to keep the unemployment rate from rising. But Baby Boomer retirements and the President’s deportations mean there are fewer people competing for work. So the break-even point is much lower – anywhere from zero to 50,000 jobs a month, said Joe Brusuelas, chief economist at the tax and consulting firm RSM. “Under the current conditions, 70,000 should be considered solid,” he said.

Companies may be holding off on hiring as they buy, install and figure out how best to use new technologies, including artificial intelligence. AI, after all, potentially means they “can do more with less” and will need fewer workers, especially for entry-level positions, Brusuelas said. They are thinking, he said, “we’ve invested an awful lot of money in (capital expenditures), and we need to see how much we can produce with our current labor force… The last thing you want to do is hire a lot of young people and then let them go.”

AP Economics Writer Christopher Rugaber contributed to this report. —Paul Wiseman, AP Economics Writer View the full article
-
US economy sheds 92,000 jobs in February in sharp slide
Figure far below expectations comes as doubts persist over labour market strengthView the full article
-
These Bowers & Wilkins ANC Headphones Are Nearly 60% Off Right Now
We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication. These factory-reconditioned Bowers & Wilkins Px7 S2 ANC headphones are currently $99 on Woot, compared with $239.99 for a new pair and around $149.99 for used listings on Amazon. Woot says the deal will run for about 26 days or until it sells out. Prime members get free standard shipping, while everyone else pays $6. “Factory reconditioned” in this case means the headphones were professionally inspected, passed a full diagnostic test, and come in the original retail box with accessories included. The Px7 S2 use custom 40mm dynamic drivers, producing a sound with strong bass, clear highs, and a fairly natural balance. Comfort and design are another big part of the experience. The over-ear design uses memory-foam earpads lined with faux leather, while the headband and earcups combine fabric and aluminum accents. The result looks understated but expensive, and the fit tends to stay comfortable during long listening sessions, notes this PCMag review. Physical controls sit on the earcups, including buttons for playback, calls, and volume, plus a Quick Action button that cycles through active noise cancellation, pass-through mode, or off. The headphones connect through Bluetooth 5.0 and support several audio codecs, including AAC, AptX, and SBC. On compatible Android devices, those codecs allow higher-quality wireless audio and even 24-bit playback with the right streaming service. Apple users won’t see the same high-resolution benefits, since iPhones don’t support AptX codecs and mostly rely on AAC. You can plug in using the included USB-C cable or the USB-C-to-3.5mm audio cable if you want a wired connection. Battery life is rated at about 30 hours per charge, depending on listening volume and ANC use.
On the downside, the noise cancellation isn’t as strong as the best models from Sony or Bose, and its companion app offers only a simple EQ. Still, at just $99, the highly rated Px7 S2 feels like a surprisingly strong value. View the full article
-
‘Always be testing’ worked in 2016 — it’s risky in 2026
If I hear “always be testing” one more time, I might scream. It was great advice in 2016. In 2026, it’s a great way to light your budget on fire. That mantra made sense when budgets were loose and platforms forgave a lot of chaos. Launch five audience tests simultaneously? Sure, why not! Swap out three creative variables at once? Go for it!

But the rules have changed. Our new reality has tighter budgets, longer learning phases, and signal fragmentation everywhere. One poorly structured test can distort your performance for weeks, not days. That performance hit compounds fast. Modern experimentation is expensive and risky. Why pay that price when we have the power of agentic AI to help? And by help, I don’t mean slapping AI onto our existing process and asking it to generate more ad variants. That would just be an expedient way to light our budgets on fire. Instead, it’s time to use agentic AI to design smarter experimentation systems.

The real cost of unstructured testing

In an “always be testing” era, it was all too easy to throw out things to test at the scale at which Oprah gives out cars or Taylor Swift fills auditoriums. It often led to unstructured testing, where we launched ideas on a Monday and checked results on Friday, hoping for a lift. There was nary a risk model, overlap detection, or strategic sequencing in sight. The costs of that approach are now exponentially higher.

Take platform disruption. Algorithms crave stability. Industry benchmarks show ad sets stuck in learning phases often see CPAs 20-40% higher than stable sets. Every time you significantly change creative, audience, or budget, you risk resetting that learning. If you’re running three overlapping tests that each trigger resets, you’re voluntarily paying a volatility tax on your entire media spend. Then there’s waste. The majority of A/B tests deliver no statistically significant lift. If you aren’t ruthless about what deserves to run, you’re burning budget to prove most ideas don’t matter.
“Always be testing” without guardrails turns into “always be destabilizing.”

From random tests to a real experimentation engine

The shift looks like this. Old approach: “AI, write me 10 new headlines.” New approach: “AI, design the smartest next experiment within our budget, risk tolerance, and current learning state.” The reframe from creative generation to experimentation architecture is where real leverage lives. Here’s a practical seven-step framework to turn testing from a tactical habit into strategic infrastructure.

Step 1: Set hard guardrails (humans draw the lines)

Before you let any AI near your experiments, lock in constraints. Without them, AI lacks proper context. With them, AI becomes a disciplined strategic partner. Define and document five hard boundaries:

Budget allocation: Reserve a fixed percentage (e.g., 10%) explicitly for testing.
Maximum volatility: “No test can increase CPA by more than 15% for more than 5 days.”
Learning phase sensitivity: Document reset thresholds per platform.
Leading indicators: Use early signals (CTR, engagement drop-offs) to kill bad tests before they damage pipeline.
Brand risk: Define off-limits positioning (e.g., no discount-heavy testing in enterprise segments).

Document this in a single file (e.g., experimentation-guardrails.md) to teach AI the constraints that make ideas viable. Your AI agent must reference this before proposing any test.

Step 2: Let AI audit your experiment history

Most teams have the data sitting in spreadsheets, but never extract the lessons. Feed your last six months of test results into an AI agent and have it analyze variables changed, duration, performance delta, statistical confidence, and platform resets. Ask it to find patterns, such as:

Over-tested variables: CTA buttons tested eight times with zero meaningful lift?
That’s not a lever.
False failures: Many tests are declared losers simply because they never reached statistical significance. An AI agent can quickly assess statistical power and flag inconclusive results.
Volatility patterns: Often, your worst CPA weeks weren’t market shifts or a single bad creative, but rather the weeks where you launched three overlapping tests.

This is how AI becomes a true analytical partner.

Step 3: Write real hypotheses

Rather than jumping straight from idea to launch, use AI to help you enforce hypothesis discipline. Weak: “Let’s test a new headline.” Strong: “If we emphasize ‘faster time-to-value’ over ‘ease of use,’ we expect a 10-15% lift in demo requests from mid-market companies because win/loss analysis shows speed is their top decision criterion.” Structured hypotheses create institutional memory. Six months later, when someone suggests testing “speed messaging” again, you’ll know exactly who it worked for and why. Yes, it feels like paperwork, but this discipline can protect your budget from algorithm chaos.

Step 4: Risk-score every proposed test

Budget isn’t infinite, and neither is algorithm stability. Your AI agent should evaluate each proposed test across five dimensions and assign a risk score:

Budget impact (e.g., <5% vs. >15%).
Algorithm disruption level (minor refresh vs. new campaign).
Audience overlap.
Brand sensitivity.
Learning value.

High risk + low learning = kill it. Low risk + high insight = green light. Example: Testing a radical new enterprise positioning statement is high risk in a paid conversion campaign. Instead, your AI agent might suggest validating it first via organic LinkedIn content or low-budget audience polling. Low risk. High signal.

Step 5: Pre-test with synthetic audiences

This is one of the most underused applications of AI in experimentation.
Synthetic testing means simulating how different personas may react to messaging before spending media dollars, and the data backs it up. A study involving researchers from Stanford and Google DeepMind found that digital agents trained on interview data matched human survey responses with 85% accuracy and mimicked social behavior with 98% correlation. This makes synthetic audiences surprisingly useful for early-stage signal gathering. While they don’t replace real-world data (at least not yet), they can act as creative QA. Here’s how it works. Define psychographic archetypes:

The Skeptical CMO (burned by vendors, risk-sensitive).
The Growth VP (speed-obsessed).
The CFO (margin-focused).

Feed your proposed messaging into your AI system and ask, “How would the Skeptical CMO react to this?” You might get feedback like: “The phrase ‘All-in-One’ triggers skepticism. It signals feature bloat. Consider reframing as ‘Integrated’ or ‘Modular.’” That kind of signal costs pennies in API calls instead of thousands in paid testing.

Step 6: Sequence tests, don’t stack them

Changing audience, creative, and landing page in the same week teaches you almost nothing. Your AI agent should act like air traffic control: scan active campaigns, flag conflicts, and recommend sequencing. A better flow: Week 1-2: Audience test. Week 3-4: Creative test on the winning audience. If overlap is unavoidable, enforce clean holdout groups so you always have a source of truth.

Step 7: Build a living knowledge base

Treat tests like disposable experiments and you lose the compounding value. Have your AI auto-summarize every completed test: Why did it win? Who did it win with? How durable was the lift? What variables interacted? Over time, this database becomes your moat. Everyone can buy the same targeting. Few teams have 100+ validated customer truths at their fingertips.
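The Step 4 rubric lends itself to a simple scoring function. This is a sketch: the five dimensions come from the framework above, while the weights, thresholds, and verdict rules are hypothetical choices any team would tune to its own guardrails.

```python
def risk_score(test: dict) -> tuple:
    """Score a proposed test on the five Step 4 dimensions (weights are illustrative)."""
    risk = (
        3 * (test["budget_impact_pct"] > 15)          # large budget swing
        + 2 * (5 < test["budget_impact_pct"] <= 15)   # moderate budget swing
        + 3 * (test["disruption"] == "new_campaign")  # likely learning-phase reset
        + 2 * test["audience_overlap"]                # collides with active tests
        + 2 * test["brand_sensitive"]                 # off-limits positioning risk
    )
    if risk <= 3 and test["learning_value"] == "high":
        verdict = "green light"   # low risk + high insight
    elif risk > 5 and test["learning_value"] == "low":
        verdict = "kill"          # high risk + low learning
    else:
        verdict = "review"
    return risk, verdict

# A safe creative refresh vs. a radical repositioning inside a paid campaign.
safe = {"budget_impact_pct": 4, "disruption": "creative_refresh",
        "audience_overlap": False, "brand_sensitive": False, "learning_value": "high"}
radical = {"budget_impact_pct": 20, "disruption": "new_campaign",
           "audience_overlap": True, "brand_sensitive": True, "learning_value": "low"}

print(risk_score(safe))     # low score: green light
print(risk_score(radical))  # high score: kill
```

Note the asymmetry: a risky idea isn’t automatically killed; the "review" path mirrors the suggestion above to validate radical positioning in cheaper channels first.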
The bigger shift: From activity to architecture

“Always be testing” was a growth-era mindset. In 2026, the winning mindset is “always be compounding intelligence.” Rather than more tests, build your competitive advantage through structured, risk-aware, insight-driven experimentation that protects algorithm stability and ties experimentation directly to revenue. The next time your stakeholder asks why you aren’t testing more, show them your experimentation architecture and say, “We’re not just running experiments. We’re building an intelligence engine.” Because intelligence compounds. View the full article
-
Pentagon follows through with its threat, labels Anthropic a supply chain risk ‘effective immediately’
The President’s administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.”

The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after the President and Defense Secretary Pete Hegseth accused the company of endangering national security. The President and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company’s products could be used for mass surveillance of Americans or autonomous weapons. Amodei said in a statement Thursday that “we do not believe this action is legally sound, and we see no choice but to challenge it in court.”

The Pentagon statement said, “this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.” Amodei countered that the narrow exceptions Anthropic sought to limit surveillance and autonomous weapons “relate to high-level usage areas, and not operational decision-making.” He said there were “productive conversations” with the Pentagon in recent days over whether it could keep using Claude or establish a “smooth transition” if no agreement was reached. The President gave the military six months to phase out Claude, which is already widely embedded in military and national security platforms.
Amodei said it’s a priority to make sure warfighters won’t be “deprived of important tools in the middle of major combat operations.” Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will “follow the President’s and the Department of War’s direction” and look to other providers of large language models. “We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work,” the company said.

How the Defense Department will interpret the scope of the risk designation is unclear. Amodei said a notification Anthropic received from the Pentagon on Wednesday shows it only applies to Claude’s use by customers as a “direct part of” their military contracts. Microsoft said its lawyers studied the rule and the company “can continue to work with Anthropic on non-defense related projects.”

Pentagon draws criticism for its decision

The Pentagon’s decision to apply a rule designed to address supply threats posed by foreign adversaries was met with broad criticism. Federal codes have defined supply chain risk as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a system in order to disrupt, degrade or spy on it. U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it “a dangerous misuse of a tool meant to address adversary-controlled technology.” “This reckless action is shortsighted, self-destructive, and a gift to our adversaries,” she said in a written statement Thursday. Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like “massive overreach that would hurt both the U.S.
AI sector and the military’s ability to acquire the best technology for the U.S. warfighter.” Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing “serious concern” about the designation. “The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent,” said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. They added that such a designation is meant to “protect the United States from infiltration by foreign adversaries — from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute.”

Anthropic sees boost in consumer downloads

While losing big partnerships with defense contractors, Anthropic experienced a surge of consumer downloads over the past week as consumers sided with its moral stance. More than a million people signed up for Claude each day this week, the company said, lifting it past OpenAI’s ChatGPT and Google’s Gemini as the top AI app in more than 20 countries in Apple’s app store. The dispute with the Pentagon has also further deepened Anthropic’s bitter rivalry with OpenAI, which began when ex-OpenAI leaders, including Amodei, founded Anthropic in 2021. Hours after the Pentagon punished Anthropic last Friday, OpenAI announced a deal to effectively replace Anthropic with ChatGPT in classified military environments.
OpenAI said it sought similar protections against domestic surveillance and fully autonomous weapons but later had to amend its agreements, leading CEO Sam Altman to say he shouldn’t have rushed a deal that “looked opportunistic and sloppy.” Amodei also expressed regret about his own part in that “difficult day for the company,” saying Thursday he wanted to “directly apologize” for an internal note he sent to Anthropic staff that attacked OpenAI’s behavior and suggested Anthropic was being punished for not giving “dictator-like praise” to The President. —Matt O’Brien and Konstantin Toropin, Associated Press View the full article
-
Silent killer: return of submarine war and death by torpedo
The sinking of an Iranian ship near Sri Lanka, the first of its kind in decades, has sparked scrutiny about Washington’s tacticsView the full article
-
5 Fashion Nova Discount Codes You Can’t Miss on RetailMeNot
If you’re looking to save on your next Fashion Nova purchase, you should know about the top discount codes available on RetailMeNot. For instance, using the E25 code gives you $25 off orders over $100, whereas the FNFAST code offers 30% off purchases exceeding the same threshold. New customers can likewise benefit from an exclusive 10% off, together with various seasonal promotions. Curious about additional codes that can maximize your savings?

Key Takeaways

Use code E25 for $25 off purchases over $100, and promo code EUP for 10% off smaller orders.
New customers can enjoy exclusive discounts of up to 50% with minimum purchase requirements.
Black Friday offers up to 50% off with the code FNBLACK50, including clearance items.
RetailMeNot regularly updates discount codes, so check for the latest offers and savings.
Combine discount codes with free shipping on orders over $75 for additional savings.

$25 Off Your Order

If you’re looking to save on your Fashion Nova order, there are several discount codes available that can help you reduce your total. One option is the Fashion Nova discount code RetailMeNot provides, which often features considerable deals. For example, you can use code E25 to get $25 off orders of $100 or more, making it a great choice for larger purchases. If you’re making a smaller order, consider the promo code EUP, which gives you 10% off your purchase. Furthermore, keep an eye out for seasonal promotions that can offer discounts of up to 50%, particularly during major events like Black Friday and Cyber Monday. By checking RetailMeNot regularly, you can find exclusive deals not available elsewhere. Don’t forget, combining these codes with ongoing promotions, such as free shipping on orders over $75, can greatly improve your savings.

30% Off Sitewide

Fashion Nova frequently offers sitewide discounts that can make shopping more affordable for everyone.
One popular option is a 25% off discount code for sitewide purchases, which requires a minimum order of $99. If you’re a frequent shopper, you can take advantage of a 30% off discount code, FNAC, valid on purchases over $100, allowing you to save even more on larger orders. New customers should likewise check for a 10% off promo code to improve their initial shopping experience. It’s important to note that RetailMeNot regularly updates Fashion Nova discount codes, ensuring you have access to the latest verified promo codes. Although these codes can provide significant savings, keep in mind that only one discount code can be applied per order. Thus, choose the most advantageous option to maximize your savings while shopping at Fashion Nova.

Up to 50% Off Black Friday Deals

As the holiday shopping season approaches, shoppers can take advantage of up to 50% off during Fashion Nova’s Black Friday deals, which presents a prime opportunity to save on a wide range of styles. This year’s promotions cover various items, ensuring significant discounts applied directly to prices across the store. You can boost your savings by using popular discount codes like FNBLACK50, which maximize your overall discounts during these events. Additionally, clearance items are included in the Black Friday sale, allowing for deeper savings by combining clearance prices with the Black Friday discounts. This means you could find exceptional deals where the total savings exceed $100 on larger orders. Don’t forget to utilize a promo code extension to track and apply your discounts seamlessly, ensuring you get the best possible price on your holiday purchases. Take advantage of these offers to refresh your wardrobe affordably this season.

Free Shipping on Orders Over $75

After taking advantage of significant savings during the Black Friday deals, shoppers can further improve their experience with Fashion Nova’s free shipping offer on orders over $75.
This promotion guarantees you save on delivery costs while enjoying your new styles. Free standard shipping typically delivers your items within 3-7 business days, making it a convenient choice for online shopping.

Order Amount | Free Shipping Available | Notes
Over $75 (USA) | Yes | Standard shipping only
Over CAD $105 (Canada) | Yes | International orders
Under $75 | No | Shipping fees apply
Oversize items | No | Check product specifications
Combine offers | Yes | Use with discount codes

10% Off for New Customers

New customers can access significant savings at Fashion Nova with exclusive discount codes that offer up to 50% off your first purchase during special promotional events. To maximize your savings, regularly check the RetailMeNot mobile app for updated coupon codes customized particularly for new customers. These offers can change frequently, so staying informed is key. Typically, promotions may require a minimum purchase amount, often around $100, to unlock discounts like 30% off. Furthermore, Fashion Nova often provides a special welcome offer for new customers who sign up for their newsletter, which can include even more savings. Don’t forget to keep an eye on seasonal sales events, as new customer discounts are frequently highlighted during major sales like Black Friday and Cyber Monday.

Frequently Asked Questions

What Are Some Discount Codes for Fashion Nova?

For Fashion Nova, you can use various discount codes to save on your purchases. One popular option is FNFAST, offering 30% off orders over $100. There’s furthermore a code for 25% off storewide on purchases of $99 or more. If you’re a new customer, sign up for the newsletter to get 10% off your first order.

What Is the SBM50 Promo Code?

The SBM50 promo code offers you a 50% discount on eligible purchases at Fashion Nova. Typically available during major sales events, this code has certain conditions, such as minimum spend requirements. You can only use it once per order, as Fashion Nova doesn’t allow stacking codes.
To make the most of this offer, keep an eye on promotional updates to guarantee you can apply the SBM50 code when shopping.

Can You Stack Discount Codes on Fashion Nova?

No, you can’t stack discount codes on Fashion Nova. When you apply a new promo code, it replaces any existing code in your cart. This means you should always use the best available code to maximize your savings. Since only one code can be applied per order, it’s important to stay informed about current promotions and their restrictions to guarantee you get the most value during checkout.

What Is the Fashion Nova 40% off Coupon?

The Fashion Nova 40% off coupon offers significant savings on eligible purchases. To use it, you’ll need to meet a specified minimum order amount, which is outlined in the promotion’s terms. Keep in mind that this coupon can only be used once per customer and can’t be combined with other discounts. For the latest availability, check the Fashion Nova website or subscribe to their newsletters for updates on current promotions.

Conclusion

To conclude, Fashion Nova offers various discount codes on RetailMeNot that can improve your shopping experience. Whether you’re looking for $25 off a $100 purchase with the E25 code or a 30% discount using FNFAST on larger orders, there are options for everyone. New customers can likewise take advantage of exclusive discounts. With free shipping on orders over $75 and seasonal promotions, you can save greatly on your fashion purchases as you enjoy a wide selection. Image via Google Gemini and ArtSmart This article, "5 Fashion Nova Discount Codes You Can’t Miss on RetailMeNot" was first published on Small Business Trends View the full article
-
This Premium ASUS OLED Gaming Monitor Is Over $100 Off Right Now
We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication. High-refresh-rate gaming monitors are getting faster every year, but a 480Hz OLED panel still feels like a technical flex—and the ASUS ROG Swift OLED PG27AQDP is one such example. This 27-inch OLED gaming monitor is currently $662.36 on Amazon, down from its usual $799 price, and price trackers show that's the lowest it has dropped so far. It sits in a very small group of monitors built around a 1440p panel with a 480Hz refresh rate, competing with models like the Sony Inzone M10S. It is designed first and foremost for high-end PC gaming, where extremely fast frame rates can actually make use of a panel this quick. ASUS ROG Swift OLED Gaming Monitor: $662.36 at Amazon ($799.00 list, save $136.64). A big part of the appeal here is the OLED panel paired with Micro Lens Array+ (MLA+) technology, which helps the screen get brighter than most OLED monitors. The difference shows up in games with strong lighting contrast. Dark scenes show the deep blacks OLED is known for, while bright elements like explosions or neon lights stand out more clearly than they do on many IPS displays. Motion also looks exceptionally clean. The 480Hz refresh rate and near-instant OLED response times make fast movement easier to track in shooters and competitive games. ASUS also includes features such as Extreme Low Motion Blur, OLED Anti-Flicker, and support for all major variable refresh rate formats, including AMD FreeSync and NVIDIA G-SYNC compatibility. Connectivity is up to date as well, with HDMI 2.1 ports that support modern consoles and GPUs. The performance is impressive, but the experience is not perfect. The hardware delivers exactly what competitive players want, yet the software side still feels rough around the edges. Some users report bugs where settings reset or behave unpredictably. 
There is also noticeable VRR flicker when frame rates change, and input lag increases when the monitor receives a 60Hz signal, which is something to keep in mind if you plan to use it for slower console games or everyday media. Still, for players chasing extremely high refresh rates and OLED contrast, this is among the most capable options available. Our Best Editor-Vetted Tech Deals Right Now Apple AirPods 4 Active Noise Cancelling Wireless Earbuds — $119.00 (List Price $179.00) Samsung Galaxy S26 Ultra, Unlocked Android Smartphone + $200 Gift Card, 512GB, Privacy Display, Galaxy AI, AI Camera, Super Fast Charging 3.0, Durable Battery, 2026, US 1 Year Warranty, Black — $1,299.99 (List Price $1,499.99) Samsung Galaxy Buds 4 AI Noise Cancelling Wireless Earbuds + $20 Amazon Gift Card — $179.99 (List Price $199.99) Google Pixel 10a 128GB 6.3" Unlocked Smartphone + $100 Gift Card — $499.00 (List Price $599.00) Apple iPad 11" 128GB A16 WiFi Tablet (Blue, 2025) — $329.00 (List Price $349.00) Apple Watch Series 11 [GPS 46mm] Smartwatch with Jet Black Aluminum Case with Black Sport Band - M/L. Sleep Score, Fitness Tracker, Health Monitoring, Always-On Display, Water Resistant — $329.00 (List Price $429.00) Amazon Fire TV Soundbar — $99.99 (List Price $119.99) Deals are selected by our commerce team View the full article
-
Tech and finance layoffs: Oracle, Block, Morgan Stanley, Capital One headline brutal week for job losses
The past week has been a brutal one for many working in the tech and financial industries. Thousands of jobs have been lost—or will be lost soon—from companies including Block, Morgan Stanley, Capital One, eBay, and, as reported today, software giant Oracle. Here’s what you need to know about the layoffs. Oracle to cut ‘thousands’ of jobs The most recent news of layoffs came yesterday, after Bloomberg reported that the database software giant Oracle Corporation (NYSE: ORCL) is planning to cut “thousands” of jobs as soon as this month. And yes, artificial intelligence is to blame—but not solely because AI is directly taking jobs. Instead, Oracle is reportedly planning job cuts to free up cash for its AI data center expansion, which the company is pursuing to compete with cloud computing giants Amazon and Microsoft. However, Bloomberg’s report noted that some of the jobs lost will be jobs “that the company expects it will need less of due to AI.” It is unknown exactly how many jobs will be lost, with Bloomberg noting that Oracle’s workforce reduction plans are “still active and could change.” Fast Company has reached out to Oracle for comment. As of May 2025, Oracle has around 162,000 employees. Capital One lays off over 1,100 workers On the same day of the Oracle job cuts report, financial giant Capital One (NYSE: COF) said that it was laying off more than 1,100 employees, according to CBS News. But these layoffs have nothing to do with AI. They follow Capital One’s acquisition of credit card giant Discover last year, which cost the company $50 billion. Shortly following that acquisition, 600 employees were laid off. Now, another 1,100 are expected to lose their jobs—primarily those who worked at the former Discover headquarters in Riverwoods, Illinois. 
A Capital One spokesperson confirmed the layoffs to CBS News, stating, "As part of our continued journey to integrate Discover with Capital One, we announced the difficult decision to eliminate some Discover associate roles across the organization." Morgan Stanley eliminates 2,500 roles A day before the Capital One layoffs were reported, the Wall Street Journal reported that investment banking giant Morgan Stanley (NYSE: MS) was laying off around 2,500 workers, or about 3% of its roughly 83,000-strong workforce. The job cuts reportedly hit employees in three divisions: investment banking and trading, wealth management, and investment management, and are reportedly "tied to shifting business and location priorities," according to the Journal, which cited anonymous sources. Fast Company reached out to Morgan Stanley for comment. Block cuts 4,000 jobs The most significant round of layoffs, however, came from Jack Dorsey's Block (NYSE: XYZ). Last Friday, the fintech company announced sweeping job cuts totaling 4,000 positions. And Dorsey didn't beat around the bush as to the reasons for the layoffs: AI. As Fast Company previously reported, Dorsey said his Block workforce was shrinking from 10,000 employees to just 6,000 due to the company's increasing use of "intelligence tools," which have allowed it to function with a "significantly smaller team." "I don't think we're early to this realization. I think most companies are late," Dorsey said in a memo published online. "Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes. I'd rather get there honestly and on our own terms than be forced into it reactively." eBay cuts 800 jobs The Block layoffs came just one day after legacy online shopping giant eBay (Nasdaq: EBAY) announced it was cutting about 6% of its workforce, or around 800 jobs. 
As Fast Company reported, the layoffs came about a week after eBay acquired the second-hand clothing app Depop from Etsy for $1.2 billion. “We are taking steps to reinvest across our business and align our structure with our strategic priorities, which will affect certain roles across our workforce,” an eBay spokesperson told Fast Company. “We are grateful for the contributions of the employees impacted and are committed to supporting them with care and respect.” Layoff announcements actually fell in February If there’s a silver lining at all to this, it’s that total layoffs appear to have fallen in February by a significant amount, according to a report from the outplacement firm Challenger, Gray & Christmas. The firm said that U.S.-based layoff announcements plunged by 55% in February, to 48,307, versus the month before. However, that dramatic fall is only as sharp as it is because January saw over 108,000 layoff announcements. For the month of February, the firm says the most jobs lost were in the technology sector, with about 11,000 jobs cut. Education came in second place, with around 5,400 jobs lost. Industrial Manufacturing came in third with about 4,100 jobs lost. Yet Challenger, Gray & Christmas cautions that the decline in job cuts might not last. “February’s dip is a nice reprieve from the elevated job cut plans to start the year,” the firm’s chief revenue officer, Andy Challenger, noted. “With U.S. involvement in a growing war in Iran, the end of Q1 may bring more layoff plans as companies tighten belts amid uncertainty and higher costs.” View the full article
-
Search News Buzz Video Recap: Google Heat Continues, AI Mode Recipe Link Cards, ChatGPT Web Search With Fewer Links & AI-Generated Search Landing Pages
This week...View the full article
-
AIO Citations Diverge From Rankings, Bing Rewrites Rules – SEO Pulse via @sejournal, @MattGSouthern
In SEO Pulse: AI Overview citations drift further from traditional rankings as AI search expands and platforms clarify how content appears in AI answers. The post AIO Citations Diverge From Rankings, Bing Rewrites Rules – SEO Pulse appeared first on Search Engine Journal. View the full article
-
Why most video ads fail — and what video metrics actually matter
Video advertising has never been easier to distribute. Platforms can deliver impressions and views at an enormous scale across YouTube, paid social, short-form video, and connected TV. But distribution isn't the same as effectiveness. Many campaigns generate impressive platform metrics while producing little measurable business impact. The problem usually isn't targeting, budget, or platform choice. It's a deeper strategic issue: campaigns are optimized for outputs like views and impressions rather than outcomes like attention, persuasion, and action. Most video ads fail because they misunderstand attention Poor targeting, limited budgets, and platform choice are rarely the real problem. The bigger issue is that many video ads are still produced as if they're television commercials. In the early days of online video, distribution was the challenge. Getting a video seen at all felt like a win. Today, distribution is abundant. Attention isn't. Every major platform — YouTube, paid social, short-form video, connected TV — competes for fragments of cognitive bandwidth. Users arrive with intent, habits, and expectations that have nothing to do with your campaign. We plan for reach, while viewers respond to relevance. I've sat in many meetings where success was defined by impressions delivered or views accrued. But when you look downstream — search lift, site engagement, conversion — the connection often disappears. Platforms will reliably deliver impressions. Turning those impressions into memory, persuasion, or action requires a fundamentally different mindset. Dig deeper: From Video Action to Demand Gen: What's new in YouTube Ads and how to win The first five seconds are the entire negotiation Skippable formats changed video advertising permanently, but many advertisers still haven't adjusted creatively. 
Early in my career, I believed strongly in branding up front. Logos, product shots, music cues — everything that signaled professionalism. Those ads looked great in presentations. They underperformed in market. A clear pattern emerged over time. Ads that opened with a recognizable problem, a provocative statement, or an unexpected visual held attention longer — even when branding appeared later. Ads that opened with branding signals were skipped almost reflexively. View-through rate isn’t persuasion. A “view” simply means the platform’s minimum threshold was met. It doesn’t mean the message landed, the brand registered, or the viewer cared. In multiple brand lift analyses, most measurable impact occurred before the skip button appeared. If the opening didn’t earn attention, the rest of the ad didn’t matter. What works: treat the opening frame like a headline, not a preamble. Lead with tension, a question, or a familiar problem. Design for sound-off environments. If the first frame wouldn’t stop a scroll, nothing that follows will matter. Higher production value often correlates with lower performance One of the most counterintuitive lessons in modern video advertising: polished ads frequently underperform scrappier ones. I’ve seen simple, phone-shot videos outperform meticulously produced studio spots across YouTube, paid social, and short-form platforms. Not because quality doesn’t matter — but because perceived authenticity matters more. Audiences are exceptionally good at identifying advertising. When something looks like an ad, they disengage. When it looks like content, they give it a chance. Algorithms reinforce this: they reward watch time, retention, rewatches, and shares. They do not reward lighting setups or production budgets. I’ve seen brands “upgrade” social video to look more premium, only to watch performance decline. The creative looked better. The results were worse. The goal isn’t to look amateurish. It’s to look like you belong. 
Match the platform's visual grammar. Prioritize clarity over polish. Use real people and authentic voices whenever possible. Ads that feel native get watched. Ads that feel inserted get skipped. Dig deeper: How to get better results from Meta ads with vertical video formats Length is a creative decision, not a media constraint "Shorter is better" is one of the most persistent — and misleading — rules in video advertising. Six-second ads can work. So can 60-second ads. I've seen both exceed expectations, and I've seen both fail badly. The difference was never duration — it was justification. Some messages can be delivered instantly. Others require context, proof, or emotional buildup. Forcing every idea into the same runtime produces predictable results: safe, bland, forgettable ads. I've reviewed retention graphs where a 45-second ad held viewers longer than a 15-second version, because the story justified its length. I've also seen six-second ads lose half their audience in the first two seconds because they wasted the opening. Test multiple edits, not just multiple lengths. Watch retention curves, not averages. Build modular narratives: hook, then value, then proof, then action. The "right" length is however long it takes to make the viewer feel their time was respected. Metrics are signals Platforms provide more data than ever. The problem isn't a lack of metrics. It's confusing metrics with outcomes. I've seen campaigns praised for high completion rates that produced no measurable business impact. Strong engagement coexisting with low conversion. Impressive view counts that delivered zero lift. This happens because platforms optimize for their success metrics, not yours. If your goal is to maximize views, the platform can do that easily. If your goal is to influence consideration, preference, or action, things get more complicated. 
One uncomfortable question I’ve learned to ask early: what would failure look like here? If the answer is vague, the campaign is already at risk. Define success in business terms before launch. Tie video metrics to downstream behavior wherever possible. Use lift studies, holdouts, or assisted conversions when they’re available. If you’re running a brand-building campaign, measure brand lift. If you’re running a performance campaign, measure conversions. Dig deeper: AI for video advertising: 5 best practices for PPC campaigns The brief is usually where things go wrong Creative is often blamed when video ads underperform. In reality, creative usually does exactly what it was asked to do. The problem is the brief. Vague objectives produce generic ads. “Brand awareness” without context leads to unfocused messaging. “Make it engaging” isn’t a strategy. Strong video ads almost always begin with clear answers to three questions: Who is this really for? What do they care about right now? What should they think, feel, or do differently after watching? When those answers are clear, creative decisions become easier. When they aren’t, the work is compromised before production begins. The deeper diagnostic questions are worth keeping close: Are viewers actually paying attention, or just passively present? What are they feeling — and which specific creative choices are driving that response? Will they remember the brand once the ad ends? What will they do next — share it, recommend it, search for the product, or buy? I’ve seen entire campaigns improve simply because the brief forced alignment around audience insight rather than assumptions. Distribution strategy is part of the creative Another common mistake is treating creative and distribution as separate decisions. They aren’t. The way an ad is consumed — fullscreen versus feed, sound-on versus sound-off, lean-back versus lean-forward — should shape how it’s made. 
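The lift studies and holdouts mentioned above reduce to simple arithmetic: compare the response rate of the exposed group to that of a comparable unexposed group. Here is a minimal sketch; the function name and numbers are invented for illustration, and a real study would add significance testing and audience balancing on top of this:

```python
def brand_lift(exposed_positive, exposed_total, holdout_positive, holdout_total):
    """Absolute and relative lift of the exposed group over the holdout.

    Rates are simple proportions: positives (e.g. users who recall the
    brand, or who converted) divided by group size.
    """
    exposed_rate = exposed_positive / exposed_total
    holdout_rate = holdout_positive / holdout_total
    absolute = exposed_rate - holdout_rate
    relative = absolute / holdout_rate if holdout_rate else float("inf")
    return absolute, relative

# Example: 420 of 10,000 exposed users recall the brand vs. 300 of 10,000 held out.
absolute, relative = brand_lift(420, 10_000, 300, 10_000)
print(f"absolute lift: {absolute:.2%}, relative lift: {relative:.0%}")
# absolute lift: 1.20%, relative lift: 40%
```

The same arithmetic works for conversions in a performance campaign; only the definition of "positive" changes.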
A video designed for connected TV shouldn't simply be resized for mobile. A short-form ad shouldn't be a truncated long-form story without rethinking the hook entirely. I've seen strong ideas underperform because the creative didn't match the placement. The concept wasn't wrong. The context was. Design with placement in mind from the start. Create platform-specific versions, not one-size-fits-all assets. Accept that "reuse" often means "rethink," not "repurpose." Distribution constraints aren't limitations — they're creative inputs. Dig deeper: How to dominate video-driven SERPs Testing should answer questions, not just generate variants Testing is indispensable. It's also frequently misunderstood. Running endless A/B tests without a hypothesis rarely produces insight. It produces noise. The most effective testing focuses on variables that materially affect attention and comprehension: opening frames, narrative structure, on-screen text versus voiceover, proof points versus emotional appeals. It's also important to recognize what testing can't do. Algorithms are excellent at optimizing toward measurable signals. They don't understand brand equity, long-term memory, or cumulative effect. Testing should inform judgment — not replace it. Ultimately, the only thing that matters for creative effectiveness tools is whether their predictions actually correlate to real media and sales outcomes — reliably enough to inform strategy and media decisions. The question worth asking of any such tool is simple: How often does what it predicts will happen actually happen? For example, I frequently cite data from DAIVID, an AI-driven creative effectiveness platform. Why? Because in independent testing, DAIVID's predictions aligned with real-world outcomes more than 80% of the time — a meaningful foundation for making creative decisions with greater confidence before a campaign goes live. 
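That predictive-validity question can be framed as a hit rate: of the outcomes a tool predicted, how many actually happened? A toy sketch with invented data (real validation would also account for sample size and base rates):

```python
def hit_rate(predictions, outcomes):
    """Fraction of cases where the predicted result matched the actual one.

    predictions/outcomes are parallel sequences of booleans, e.g.
    "predicted to beat the benchmark" vs. "actually beat the benchmark".
    """
    if len(predictions) != len(outcomes):
        raise ValueError("sequences must be the same length")
    matches = sum(p == o for p, o in zip(predictions, outcomes))
    return matches / len(predictions)

# Ten hypothetical ads: the tool was right on 8 of 10.
predicted = [True, True, False, True, False, True, True, False, True, True]
actual    = [True, True, False, False, False, True, True, True, True, True]
print(hit_rate(predicted, actual))  # 0.8
```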
Optimize for people Platforms will change. Formats will evolve. Algorithms will shift in opaque and sometimes frustrating ways. But attention, curiosity, and trust remain stubbornly human. The best video ads I've worked on weren't optimized for view counts or completion rates. They were optimized for relevance. They respected the viewer's time. They said something worth hearing. Video ads don't succeed because they follow platform rules. They succeed because they understand people. And that principle outlasts every algorithm update. View the full article
-
Google: Most Sites Don't Need To Disavow Links But That's Not All Sites
Google's John Mueller again spoke about the disavow link file. This time he said that while "most sites don't need it," he added that "that's not all sites." Some sites may indeed need to disavow links.View the full article
-
Bing Search Tests Go To Shopping Button
Microsoft is testing a "Go to Shopping" button within the Bing Search results. This replaces the narrower shopping section that shows shopping results but just says "see all."View the full article
-
Bing With Asian Owned Labels On Microsoft Ads
A few years ago, Microsoft Advertising announced support for Asian-owned labels and attributes on its search ads within Bing. Google has a similar attribute, by the way. Honestly, I've never seen the label on Bing, until now.View the full article
-
Local governments could deploy AI for good. Here’s how
When considering AI’s impact in cities, many residents and government officials envision a dark future of unbridled surveillance, hollowed-out city halls and unaccountable bots calling the shots based on biased training data. We, on the other hand, embrace a much more optimistic vision. With ambitious local leadership, AI, and especially the coming wave of agentic AI, can offer a profound opportunity not only to make government services more efficient but also to transform how cities fulfill their end of the social contract. As long-time public servants and champions of government innovation at our respective universities, we understand the challenges local governments face, including tight budgets, aging infrastructure and dissatisfied residents accustomed to the speed of Amazon and personalization of Spotify. Most cities still run on a century-old operating system built on bureaucracy, paper files, agency silos and rigid hierarchy. Agentic AI offers a unique opportunity to redesign how cities work, a model we call the “Agentic City.” Agents, city employees, and citizens working together Imagine a city administration where the complexity of navigating government bureaucracy is offloaded to intelligent agents—so routine tasks happen flawlessly, and even complex ones feel simple. A mother reports a broken sidewalk near her child’s school, snaps a photo, and sends it to the city. An AI agent classifies the problem, routes it to the right crew, tracks progress across agencies, proactively updates her until the work is done, and alerts others to similar risks nearby. Imagine a city that fixes pavement cracks before they become potholes, changes street lamps before they burn out, and repairs water lines before they leak. Yet even these dramatic improvements will only constitute steps in a transformation. 
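The triage step in that sidewalk-report workflow can be sketched crudely; the categories, departments, and keyword matching below are invented for illustration, and a production system would use a trained classifier and real work-order APIs rather than string matching:

```python
# Hypothetical keyword-based triage for citizen service reports.
ROUTES = {
    "sidewalk": "Public Works",
    "pothole": "Public Works",
    "streetlight": "Utilities",
    "water": "Water Department",
}

def route_report(description: str) -> str:
    """Pick the department whose keyword appears in the report text."""
    text = description.lower()
    for keyword, department in ROUTES.items():
        if keyword in text:
            return department
    return "311 General Intake"  # fallback for unrecognized reports

print(route_report("Broken sidewalk near the elementary school"))  # Public Works
```

The agentic version of this would also open the work order, track it across agencies, and push status updates back to the resident, but the classification-and-routing core is the same.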
These tools help reform-minded mayors adopt a systems approach that sidesteps the strong headwinds often confronting business reengineering, including efforts to integrate agency functions or disparate data systems. Transportation officials no longer need to tweak signals; an AI traffic agent can balance safety, travel time and emissions. An Agentic City will be one in which agents, public employees and residents work together. Ultimately, all city services will be personalized as residents use an "agentic front door" to state their goals ("want to open a barber shop at 10th and Main"). Agents will walk users through the process or even complete those tasks for them. At the same time, a human monitors the results, troubleshoots and takes on difficult or unusual cases. In fact, this city offers preemptive housing vouchers, rental assistance, and property tax relief to those who qualify, obviating the application maze entirely. A systemic approach Getting there will require strong leadership to overcome gaps in imagination, skill deficits, and employee anxiety, compounded by the complexity of ensuring that AI changes comply with democratic values. Local leaders will need to take a systematic approach, crafting a powerful narrative of the service benefits while using their political and legal skills to negotiate with the city council, union, and employee leaders. AI-driven transformation requires a leadership team supported by academic and other local experts who understand the city's technical capacity and its legal and data limitations, and who can stretch the imagination of a bureaucracy accustomed to existing processes. That team should map out opportunities for both employees and residents, including the agentic front door, repetitive functions that can be outsourced to AI, and more time for staff to take on higher-value work: investigating root causes, engaging communities, and exercising judgment. 
Third, the leadership team should promote the incorporation of agentic capabilities that help employees identify patterns and causes of recurring problems by making data more easily accessible. Municipal workforces, both union and nonunion, represent a key stakeholder. Mayors need to be clear that AI will complement, not replace, the workforce. An Agentic City initiative would include outreach to labor to set the parameters of a new bargain in which workers, armed with data insights, increase productivity and share in the benefits through pay increases. Data literacy training and a data governance framework should also be essential components. Freed of repetitive tasks, public employees can focus on higher-value work. The data foundation Addressing these concerns responsibly begins with the system’s foundation: the data. Cities must invest in data pipelines that are not merely machine-readable but machine-understandable—structured with rich metadata, shared ontologies, and business-logic context—so that both humans and AI agents can interpret meaning, constraints, and appropriate use. Emerging approaches such as Model Context Protocols (MCPs), which standardize how AI systems access structured data and operational tools, represent a promising step in this direction by helping agents understand not only what data exists but also how it should be used. An agent that can “see” a permit record but not understand the regulatory framework, eligibility rules, or data quality limitations behind it will act inconsistently and require constant human correction. Machine-understandable data reduces that friction and makes agentic systems more reliable, transparent, and scalable. In short, the foundation of an Agentic City is not just smarter algorithms, but smarter data architecture. Implementing an agentic city hall presents substantial challenges. However, now is the time to lead, as mayors cannot afford to maintain the status quo or wait for the AI tsunami. 
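The gap between machine-readable and machine-understandable data described above can be made concrete with a record that carries its own schema and usage context. Every field name, code, and rule below is hypothetical, purely to show the shape of the idea:

```python
# A permit record that is merely machine-readable: values without context.
raw_record = {"permit_id": "P-1042", "status": "PEND", "zone": "C2"}

# A machine-understandable version: the same values wrapped with the
# metadata an agent needs to interpret and act on them safely.
annotated_record = {
    "data": raw_record,
    "schema": {
        "status": {
            "meaning": "application lifecycle state",
            "codes": {"PEND": "pending review", "APPR": "approved"},
        },
        "zone": {
            "meaning": "zoning district per municipal code",
            "constraint": "commercial use only in C-class zones",
        },
    },
    "usage": {
        "update_allowed_by": ["permitting_office"],
        "quality_note": "zone field unverified for records before 2020",
    },
}

def explain(record, field):
    """Resolve a coded value to its documented meaning, if one exists."""
    value = record["data"][field]
    codes = record["schema"][field].get("codes", {})
    return codes.get(value, value)

print(explain(annotated_record, "status"))  # pending review
```

An agent consuming the annotated form can tell that "PEND" means the permit is still under review, that the zone field has known quality limits, and who is allowed to change what; an agent given only the raw record can do none of that.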
Going forward presents challenges as well. But doing nothing poses a greater risk than getting started, and the payoff will be a city that, through more meaningful work for its employees, becomes more responsive to its residents. View the full article
-
Google Local Service Ads Won't Credit Calls For Existing Clients (Not Lead)
Google Local Service Ads can be super expensive; each call or click can cost hundreds of dollars. That's why Google has generally been good about refunding for mistaken leads or issues with those leads. View the full article