All Activity
- Past hour
-
Shopping Ads Testing In AI Mode, Microsoft’s AI Search Guide & Keyword Strategy Shift – PPC Pulse via @sejournal, @brookeosmundson
The latest PPC Pulse highlights Google’s AI Mode ad experiments, Microsoft’s AI discovery framework, and the continued evolution of search campaign structure. The post Shopping Ads Testing In AI Mode, Microsoft’s AI Search Guide & Keyword Strategy Shift – PPC Pulse appeared first on Search Engine Journal. View the full article
-
AI is still both more and less amazing than we think, and that’s a problem
Hello again, and welcome back to Fast Company’s Plugged In. A February 9 blog post about AI, titled “Something Big Is Happening,” rocketed around the web this week in a way that reminded me of the golden age of the blogosphere. Everyone seemed to be talking about it—though as was often true back in the day, its virality was fueled by a powerful cocktail of adoration and scorn. Reactions ranged from “Send this to everyone you care about” to “I don’t buy this at all.” The author, Matt Shumer (who shared his post on X the following day), is the CEO of a startup called OthersideAI. He explained he was addressing it to “my family, my friends, the people I care about who keep asking me ‘so what’s the deal with AI?’ and getting an answer that doesn’t do justice to what’s actually happening.” According to Shumer, the deal with AI is that the newest models—specifically OpenAI’s GPT-5.3 Codex and Anthropic’s Claude Opus 4.6—are radical improvements on anything that came before them. And that AI is suddenly so competent at writing code that the whole business of software engineering has entered a new era. And that AI will soon be better than humans at the core work of an array of other professions: “Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service.” By the end of the post, with a breathlessness that reminded me of the Y2K bug doomsayers of 1999, Shumer is advising readers to build up savings, minimize debt, and maybe encourage their kids to become AI wizards rather than focus on college in the expectation it will lead to a solid career. He implies that anyone who doesn’t get ahead of AI in the next six months may be headed for irrelevance. The piece—which Shumer told New York’s Benjamin Hart he wrote with copious assistance from AI—is not without its points. 
Some people who are blasé about AI at the moment will surely be taken aback by its impact on work and life in the years to come, which is why I heartily endorse Shumer’s recommendation that everyone get to know the technology better by devoting an hour a day to messing around with it. Many smart folks in Silicon Valley share Shumer’s awe at AI’s recent ginormous leap forward in coding skills, which I wrote about last week. Wondering what will happen if it’s replicated in other fields is an entirely reasonable mental exercise. In the end, though, Shumer would have had a far better case if he’d been 70% less over the top. (I should note that the last time he was in the news, it was for making claims involving the benchmark performance of an AI model he was involved with that turned out not to be true.) His post suffers from a flaw common in the conversation about AI: It’s so awestruck by the technology that it refuses to acknowledge the serious limitations it still has. For instance, Shumer suggests that hallucination—AI stringing together sequences of words that sound factual but aren’t—is a solved problem. He writes that a couple of years ago, ChatGPT “confidently said things that were nonsense” and that “in AI time, that is ancient history.” It’s true that the latest models don’t hallucinate with anything like the abandon of their predecessors. But they still make stuff up. And unlike earlier models, their hallucinations tend to be plausible-sounding rather than manifestly ridiculous, which is a step in the wrong direction. The same day I read Shumer’s piece, I chatted with Claude Opus 4.6 about newspaper comics—a topic I often use to assess AI since I know enough about it to judge responses on the fly—and it was terrible about associating cartoonists with the strips they actually worked on. The more we talked, the less accurate it got. 
At least it excelled at acknowledging its errors: When I pointed one out, it told me, “So basically I had fragments of real information scrambled together and presented with false confidence. Not great.” After botching another of my comics-related queries, Claude said, “I’m actually getting into shaky territory here and mixing up some details,” and asked me to help steer it in the right direction. That’s an intriguing glimmer of self-awareness about its own tendency to fantasize, and progress of a sort. But until AI stops confabulating, describing it as being “smarter than most PhDs,” as Shumer does, is silly. (I continue to believe that human capability is not a great benchmark for AI, which is already better than we are at some things and may remain permanently behind in others.) Shumer also gets ahead of himself in his assumptions about where AI might be in the short-term future when it comes to being competently able to replace human thought and labor. Writing about the kind of complex work tasks he recommends throwing AI’s way as an experiment, he says, “If it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly.” That seems extraordinarily unlikely, given that all kinds of generative AI have been stuck in the “kind-of-works” era for years now. A decent rule of thumb: Don’t believe AI will be able to do something well until it actually does. Ultimately, the takeaway from Shumer’s post I’ll remember most isn’t anything he wrote. In the spirit of AI experimentation, I fed his piece to ChatGPT, Gemini, and Claude with the prompt “Give me an analysis/critique of this essay. Tell me whether it’s overly cautious, not cautious enough, what your own take is on the subjects discussed, etc.” I was prepared for them all to respond with something facile but shallow, more a bullet-point summary than anything. ChatGPT and Gemini lived up to those expectations. 
However, Claude’s swing at the topic—which it popped out in seconds, and you can read in its entirety here—startled me. It took issue with some of the same things about the piece that bothered me, but also raised other weaknesses I hadn’t considered. I felt smarter after reading it, which still isn’t what I expect from AI prose. A few nuggets: Coding is a uniquely favorable domain for AI — the outputs are verifiable, the feedback loops are tight, the training data is abundant, and the environment is fully digital. Lawyering, medicine, and management involve ambiguity, institutional context, human relationships, regulatory accountability, and adversarial dynamics that are qualitatively different. He acknowledges this briefly and then waves it away. …. Yes, many people tried GPT-3.5 and wrote off the whole field. That’s a real problem. But plenty of serious, current users — including people paying for the best models — still encounter significant limitations: hallucination (reduced but not eliminated), brittleness on novel problems, inability to maintain coherent context over truly long projects, poor calibration about its own uncertainty, and difficulty with tasks requiring real-world grounding. Shumer treats all skepticism as outdated, which is intellectually dishonest. …. He also doesn’t seriously engage with the possibility that the economic incentives might not play out as smoothly as “AI can do the job, therefore the job disappears.” Adoption friction is real. Liability concerns are real. Organizational inertia is real. The history of technology suggests that even transformative tools take much longer to restructure industries than the pure capability timeline would suggest. Electricity took decades to reshape manufacturing even after it was clearly superior to steam. … I think the underlying signal Shumer is pointing at is real. 
AI capabilities are improving faster than most people outside the industry appreciate, and the gap between public perception and current reality is genuinely large. He’s right that most people should be engaging with these tools more seriously than they are. But I think the essay is doing a thing that’s very common in Silicon Valley: mistaking the potential trajectory for the inevitable trajectory, and compressing timelines based on vibes and extrapolation rather than grappling seriously with the many real-world frictions that slow adoption. The piece reads like it was written in the immediate emotional aftermath of being impressed by a new model release — and those moments tend to produce overconfidence about pace. To recap: On the same day that I found Claude Opus 4.6 writing something about Shumer’s piece that was not only coherent but insightful, it also devolved into a hallucinogenic fit. That’s just how AI is these days: amazing and terrible at the same time. Somehow, that reality is tough for many observers to accept. But any analysis that ignores it is at risk of badly misjudging what will come next. You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard. More top tech stories from Fast Company Developers are still weighing the pros and cons of AI coding agents The tools continue to struggle when they need to account for large amounts of context in complex projects. 
Read More → AI expert predicted AI would end humanity in 2027—now he’s changing his timeline The former OpenAI employee has rescheduled the end of the world. Read More → Discord is asking for your ID. The backlash is about more than privacy Critics say mandatory age verification reflects a deeper shift toward routine identity checks and digital surveillance. Read More → A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir Current and former employees tell Fast Company the ad campaign is driven by opposition to the Democratic hopeful’s support for AI regulation. Read More → Facebook’s new profile animation feature is Boomerang for the AI era The feature is part of a wider push toward AI content in Meta apps. Read More → MrBeast’s business empire stretches far beyond viral YouTube videos Banking apps, snack foods, streaming hits, and data tools are all part of Jimmy Donaldson’s growing $5 billion portfolio under Beast Industries. Read More → View the full article
- Today
-
Europe needs ‘emergency mindset’ to survive, warns Danish PM
Mette Frederiksen tells FT that Greenland crisis has shown the ‘old world will not come back’. View the full article
-
15 Smarter Interview Questions For Hiring Digital Marketers In 2026 via @sejournal, @brookeosmundson
Hire better digital marketers with these interview questions that expose real strategic and performance thinking. The post 15 Smarter Interview Questions For Hiring Digital Marketers In 2026 appeared first on Search Engine Journal. View the full article
-
Popeyes is losing the chicken sandwich wars. Its comeback plan starts with low-performing locations
Once the king of the chicken sandwich, Popeyes faces a lot of competition for the crown these days. Ascendant fried chicken hotspot Raising Cane’s exploded in growth last year, knocking off KFC to become the third most-popular fast food chicken chain in the U.S. behind Chick-fil-A and Popeyes. Meanwhile, upstarts like Dave’s Hot Chicken and Hangry Joe’s Hot Chicken & Wings are growing fast and eyeing a similar trajectory. Popeyes once inspired feverish hordes and all-day lines for its top-selling chicken sandwich, but it’s been a rocky ride of late. Popeyes parent company Restaurant Brands International (RBI) just reported its quarterly earnings, and in the last quarter, the chicken chain’s U.S. sales were down nearly 5%—its fourth consecutive quarterly slide. Other fast food brands under RBI’s umbrella saw sales tick up during the same time period. Beyond Popeyes Louisiana Kitchen, RBI also owns Burger King, Tim Hortons, and Firehouse Subs. With almost 20,000 locations, Burger King is RBI’s biggest chain, dwarfing the 5,000 Popeyes locations around the globe. “We’ve had weaker performance than we’d like over the last few quarters, and that’s why you saw us make the change in leadership,” RBI CEO Josh Kobza said on the company’s earnings call. He noted the company’s decision to bring former Burger King COO Peter Perdue in as Popeyes U.S. and Canada president. Popeyes also plans to triage its lowest-performing locations with targeted support, coaching visits, and “experience rallies” for Popeyes restaurant general managers across the U.S. Kobza said that Popeyes plans to double down on operations and “narrow the focus” back to chicken on the marketing and product side. “We know Popeyes is capable of much more and we’re taking decisive action to put the brand back on the right path while supporting our franchisees to deliver stronger results at the restaurant level,” Kobza said. 
Reviving Popeyes
In January, almost 20 Popeyes locations in Georgia and Florida closed their doors after one of the chicken chain’s major operators declared bankruptcy. While Popeyes says that the majority of the 100-plus locations operated by franchisee Sailormen Inc. were profitable, borrowing rates, high inflation, and dwindling foot traffic contributed to the closures. Popeyes insists that the closures don’t reflect the broader brand, which is owned by quick-service restaurant conglomerate RBI. Perdue reportedly reassured other franchisees that Sailormen’s bankruptcy “does not reflect the healthy unit economics that you are experiencing in your restaurants.” For Popeyes, the problem clearly isn’t chicken. Persistent inflation continues to take a toll on the restaurant industry, but Americans are still opting for poultry on the go at Popeyes’ competitors like Raising Cane’s and Dave’s Hot Chicken. Traffic is down at fast food joints broadly too, but chicken restaurants lapped their lagging peers last year. For Popeyes, the problem is Popeyes—something the company seems well aware of right now. “Our performance this year reinforces a clear reality,” Kobza said in the earnings report, noting the intense level of competition in the quick-service chicken game. “At its core, the chicken business is a service business and winning requires consistent speed, accuracy and reliability in every restaurant every day.” View the full article
-
Advertising made the internet accessible. Will it do the same for AI?
Advertising in generative AI systems has become a fault line. Last month, OpenAI announced that it would start running ads in ChatGPT. Speaking at the World Economic Forum in Davos, OpenAI’s chief financial officer defended the introduction of ads inside ChatGPT, arguing that it is a way to “democratize access to artificial intelligence,” and that this decision is aligned with its mission: “AGI for the benefit of humanity, not for the benefit of humanity who can pay.” Within days, Anthropic fired back in a Super Bowl commercial, ridiculing the idea that ads belong inside systems people trust for advice, therapy, and decision-making. In one sense, this is a spat about how each company is marketing itself. In another, this debate echoes those about the early internet, but with far higher stakes.
The big question
The underlying question is not whether advertising generates revenue. It clearly does. It is whether advertising is the only viable way to fund AI at scale, and whether, if adopted, it will quietly dictate what these systems optimize for. History offers a cautionary answer. The last several decades of online advertising have proven that when profit is decoupled from user value, incentives drift toward harvesting data and maximizing engagement—the variables that can be most easily measured and monetized. That trade-off shaped everything in the internet economy. As advertising scaled, so did the incentives it created. Attention became a scarce resource. Personal information became currency.
What Google taught us
Google’s founders themselves acknowledged this risk at the dawn of the modern web. In their 1998 Stanford paper, Sergey Brin and Larry Page warned that ad-funded search engines create inherent conflicts of interest, writing that such systems are “biased towards the advertisers and away from the needs of the consumers,” and that advertising incentives can encourage lower-quality results. 
Despite this warning, the system optimized for what could be measured, targeted, and monetized at the expense of privacy, transparency, and long-term trust. These outcomes were not inevitable. They flowed from early design choices about how advertising worked, data moved, and influence was disclosed.
A pivotal moment
Artificial intelligence now finds itself at a similar pivotal moment, but under far greater economic pressure and with far higher stakes. It is worth noting that artificial intelligence is not cheap to run. OpenAI projected that it will burn through $115 billion by 2029. Like internet users, AI users are unwilling to pay for access, and advertising has historically allowed the internet, and businesses depending on it, to scale beyond paying users. If advertising is going to fund AI, personal data cannot be the fuel that powers it. If conversations on an AI platform leak into targeting data, users will stop trusting it and will start viewing it as a surveillance tool. Furthermore, once personal data becomes currency, the system inevitably optimizes for extraction. That does not mean future advertisers on these AI platforms would have to operate in the dark. Brands will still need to know that their spending delivers results, and that their messages reach users aligned with their values. It is justifiable that brands need outcome measurement and contextual assurance.
The real problem
The irony in Anthropic’s critique is instructive. A Super Bowl commercial is itself a testament to advertising’s enduring power as a form of communication and cultural signaling. Advertising is not the problem. Invisible incentives are. The way to satisfy both consumer trust and business growth is to build the advertising ecosystem on open, inspectable systems so that influence can be seen, measured, and governed without requiring the collection or exploitation of personal data. Standards such as the Ad Context Protocol set out to do exactly this. 
This is the window in which profit can still be aligned with value. At stake is the difference between advertising as manipulation and advertising as sustainable and enduring market infrastructure. The ad-funded internet failed users not because it was free, but because its incentives were invisible. AI has the chance to do better. The choice is ours to make. View the full article
-
DP World boss leaves company after Epstein emails published
Sultan Ahmed bin Sulayem had transformed the Dubai-based group into one of the world’s largest logistics operators. View the full article
-
Hundreds may join E Mortgage Capital wage lawsuit
Hundreds of E Mortgage Capital employees, including loan officers, can opt-in to the complaint accusing the company of failing to pay them for overtime. View the full article
-
The U.S. government has 3,000 AI systems in place. Will they fix anything?
AI is upending business, our personal lives, and much more in between—including the operation of the U.S. government. In total, The Washington Post reported 2,987 uses of AI across the executive branch last year, hundreds of which are described as “high impact.” Some agencies have embraced the technology wholeheartedly. NASA has gone from 18 reported AI applications in 2024 to 420 in 2025; the Department of Health and Human Services, overseen by Robert F. Kennedy Jr., now reports 398 uses, up from 255 a year ago. The Department of Energy has seen a fourfold increase in AI usage, with a similar jump at the Commerce Department. Agencies were effectively given the green light in April 2025, when the White House announced it was eliminating barriers to AI adoption across the federal government. They appear to have taken that invitation seriously. Those numbers may raise eyebrows—or trigger concern among observers worried about bias, hallucinations, and lingering memories of the chaotic AI-enabled government overhaul associated with the quasi-official Department of Government Efficiency during Elon Musk’s brief orbit near the center of power. “It’s not clear using AI for most government tasks is necessary, or preferable to conventional software,” cautions Chris Schmitz, a researcher at the Hertie School in Berlin. “The digital infrastructure of the U.S. government, like that of many others, is a deeply suboptimal, dated, path-dependent patchwork of legacy systems, and using AI for ‘quick wins’ is frequently more of a Band-Aid than a sustainable modernization.” Others who have worked at the center of government digital innovation argue that alarmism may be misplaced. In fact, they say, experimenting with AI can be a form of smart governance—if done carefully. “It’s become apparent that we never really properly moved government into the internet era,” says Jennifer Pahlka, cofounder and chair of the board at the Recoding America Fund and former U.S. 
deputy chief technology officer under the Obama administration. “There have been real problems that have come out of that where government is just not meeting the needs of people in the way that it should.” Pahlka believes that experimentation with AI in government is “probably somewhat appropriate” given how early we are in the generative AI era. Testing is necessary to understand where—and where not—the technology can improve operations. “What you want, though, is ways of experimenting with this that gives you very clear and effective feedback loops, such that you are catching problems before it’s rolled out to large numbers of people or to have a large impact,” she says. Still, it is far from certain that AI systems will produce outcomes that serve all Americans equally. Denice Ross, executive fellow in applied technology policy at the University of California, Berkeley, warns that rigorous evaluation is essential. “The way government would find out if a tool is doing what it’s supposed to for the American people is by collecting and analyzing data about how it performs, and the outcomes for different populations,” says Ross, who served as chief data scientist in the White House from 2023 to 2024. The core issue, she says, is whether a given system is actually helping the people it’s meant to serve, or whether “some people [are] being left behind or harmed.” The only way to know is to look closely at the data. That might mean discovering, for example, that a tool works fine for digitally fluent users but falls short for people without high-speed internet or for older Americans. Public participation is also critical. 
“Getting the conditions for legitimate government AI use right is hard, and this work by and large has not been done,” the Hertie School’s Schmitz argues, noting that “there has been no real democratic negotiation of the legal basis for automated decision-making or build-out of oversight structures, for example.” There are also reasons to be cautious about rushed or poorly structured AI deployments, including reported plans at the Department of Transportation to experiment with tools like Google Gemini. Philip Wallach, a senior fellow at the American Enterprise Institute, argues that while the government should be exploring how rapid advances in AI can serve the public, it must do so without sacrificing democratic accountability. The priority, he suggests, should be preserving accountable human judgment in government decision-making before momentum and political expediency crowd it out. Looking at the government’s overall AI strategy, Pahlka says she sees some grounds for cautious optimism. From what she can tell, many of the early efforts appear focused on applying AI to bureaucratic bottlenecks and process slowdowns where it could meaningfully boost productivity. If that focus holds, she suggests, the payoff could be pretty useful. Still, she believes more care and attention to detail is needed—something the White House has not always demonstrated. “What I’m not sure I see is a questioning of the processes themselves,” she says, explaining that, in her view, thoughtful AI adoption requires asking whether a process should exist in its current form at all—not simply whether AI can accelerate one step within it. That distinction matters because poorly implemented AI can have real consequences. Government’s track record with large-scale technology deployments is uneven, and layering AI onto flawed systems could cause undue harm. 
“We have consistently rolled out technology in government in ways that have harmed people because we do not have test and learn frameworks as the fundamental way of approaching these problems,” Pahlka says. If done right, however, the opportunity is significant. AI could help government function more effectively, and more equitably, for everyone. View the full article
-
Anthony Edwards has a plan to get your attention
When Minnesota Timberwolves star Anthony Edwards steps onto the NBA All-Star court in Los Angeles with the league’s best players, there will be cameras following his every move. But it won’t just be NBC clocking the action. Edwards’s own Three-Fifths Media will be there for his ongoing unscripted show, Year Six. It’s the second season chronicling the daily grind of his NBA exploits, building on last year’s Year Five. Three-Fifths Media started in 2019 with Justin Holland, Edwards’s business partner and manager. They signed a production deal with Wheelhouse in 2024 to collaborate on projects like Year Six. So far, Three-Fifths has produced Serious Business, an unscripted show on Prime Video that challenges celebrities and athletes in their own domains; Year Five and now Year Six; and the inaugural Believe That Awards, which aired in October on YouTube and had 167 million views across platforms in its first 48 hours. On the side, Edwards also produced a hip-hop album featuring heavyweights Pusha T, Quavo, and Wale. The 24-year-old Edwards is methodically building his own content and entertainment business clearly influenced by the success some of his on-court heroes have had over the past decade, like Kevin Durant with Boardroom and LeBron James with Fulwell Entertainment (formerly the SpringHill Co.). Of course, there is no guaranteed blueprint—witness SpringHill’s financial struggles, despite strong productions, that led to its merger with Fulwell last year. The two common threads among Three-Fifths Media’s projects are that they shine a spotlight on a real and (largely) unfiltered Anthony Edwards, and are at least partly owned by the NBA star. Holland says that’s not only at the core of their content, but the overall business strategy. “We’ve leaned into being authentic in every room we walk into, and prioritize ownership over exposure,” says Holland, who has been working with Edwards since 2016. 
“Not just looking for deals because of dollar amounts or because they’re cute, but also really leaning into brands that we really can take ownership in, allow us to keep that authenticity, and also look for opportunities where we can actually own our IP.” Just like Edwards’s on-court career, it’s been an impressive start, and shows potential to help redefine athlete-owned media.
Believe That
Okay, picture this: A remake of the 2001 film Training Day, starring Timothée Chalamet as Ethan Hawke’s character opposite NBA star Anthony Edwards in Denzel Washington’s spot. It sounds crazy, obviously, but Chalamet and Edwards actually talked about it in October when Edwards awarded the actor his “White Boy of the Year” honor as part of the satirical Believe That Awards show. The show didn’t feature a red carpet, nor was it drenched in celebrity—though Chalamet and Candace Parker made Zoom appearances. It was shot in Edwards’s actual basement, and had the feel of a Saturday night hang-out with him and his friends. That ability to seamlessly jump from highly produced work like Year Five to more street-level, vlogger-style content is perhaps Edwards’s biggest media strength. “You have guys that impact culture, and then you have guys that create,” says Holland. “Ant’s one of those guys that creates culture. So everything that we do, we’re intentional about not trying to follow the standard, and aim to actually be innovative in our creative process.” There’s a reason the vibe of hanging with Edwards and his friends permeates so much of his work (his best friend, Nick Maddox, stars in many of his Adidas spots)—it’s because that’s what’s really happening. “It is actually pretty easy when you have a guy like Anthony and our crew,” says Holland. 
“We keep everything really tailored to our core group and just want to make sure that we continue to build from there.”
Brand consistent
Holland says that early in Edwards’s career, brands would try to fit the young up-and-coming NBA star into their box, the version of him they wanted. The work they’ve done with partners like Adidas, Sprite, Bose, and Prada represents those that have not only steered away from the old hold-the-product-and-smile approach, but encouraged Edwards to take ownership of the creative. Most modern athletes will talk about authentic connection with both brands and fans, but tend to serve up only the most curated and choreographed version of it. What makes Edwards’s work most unique is how it makes fans feel a part of that inner circle, whether in a social post or a big-time sneaker ad. “We try to stay away from just brand endorsements and we really like to be in business with people that really understand who we are and then actually want to collaborate with us,” says Holland. That translates to having Maddox starring in Adidas ads, or Edwards’s brother’s music featured in a Bose campaign. It also brings Edwards’s natural affinity for trash talk to his brand work. Brands typically shy away from controversy, but Adidas has embraced Edwards’s approach wholeheartedly. They turned heads last year, launching his first signature shoe with ads that called out other pro shoe models and social media trolls by name. In a spot called “Top Dog” for his AE2 shoe, he beats video game caricatures of his biggest rivals—Luka Dončić, Victor Wembanyama, and Shai Gilgeous-Alexander, among others. Holland says getting brand partners to embrace Edwards’s authentic self was tougher at first, but the results speak for themselves. “We talk to our partners about our overall picture, looking at it from a wide lens of how we want to operate,” he says. “Now those conversations are a lot easier. 
They see how we move and how the public actually reacts to the authenticity, and how it resonates, because it just makes all the work that much more relatable.” View the full article
-
San Jose just made its buses 20% faster
Public transit could be on the verge of getting a whole lot more efficient. The Bay Area city of San Jose says it has improved public transportation by implementing an AI transit signal priority (TSP) system that makes its bus routes 20% faster and shortens ride times for passengers. An urban planning win, it also broadens the strategies available to other cities looking to improve their public transport. TSP systems are programs that make traffic lights responsive and adaptable to public transportation in real time. They can extend a green light to give buses an extra second to make it through an intersection or shorten a red light so they don’t have to wait as long. It’s similar to the higher-urgency emergency vehicle preemption (EVP) system for first responders. While EVP systems for ambulances, fire engines, and police cars can immediately change signals, TSP systems for buses or trains can only nudge them. The extra moments from those lower-priority nudges, though, can still make a meaningful difference in keeping buses operating on schedule. “By helping buses move more efficiently through intersections, the technology reduces delays, improves on-time performance, and shortens wait times for riders,” a statement from the city read. Cities have found other ways to reduce wait times for riders. AI lane enforcement that tickets vehicles driving in or blocking the bus lane cuts the number of illegally parked cars in a hurry. In London, buses have switched to contactless boarding, which led to improved boarding times.
San Jose becomes one of several test cities
San Jose’s TSP was developed by Lyt, a Northern California transit software company. Its software interacts with a transit agency’s traffic management center via a computer called Maestro. Lyt’s system was piloted in San Jose beginning on just two Santa Clara Valley Transportation Authority (VTA) bus routes in 2023; now it’s used for 24 routes. Federal and state funds paid for a majority of the project. 
Lyt provided TSP software for buses in Portland, Oregon, in 2022 that reduced delays by 69%. Last September the company announced it would pilot its tech on four bus routes in Baltimore. Lyt did not respond to a request for comment. Lyt’s TSP technology uses criteria like routing information, traffic conditions, and vehicle location to predictively keep buses running on time. The company pitches its system as better and more cost-effective than the analog prioritization method of dash-mounted strobes on buses that beam infrared or optical lights to traffic pole equipment. “Our cloud-based transit priority system takes the global picture of a route into account and uses machine learning to predict the optimal time to grant the green light to transit vehicles at just the right time,” Lyt founder and CEO Tim Menard said in a statement about the system when it expanded across more San Jose routes in 2023.
Public transit garners new public interest
City bus speeds have grown from being strictly transportation and infrastructure issues to something that resonates more broadly after New York City Mayor Zohran Mamdani won last year’s election in part on a campaign promise to make city buses faster and free to ride. It’s a promise Mamdani’s office says he intends to keep, even after the federal Department of Transportation developed a proposal to stop its transit funding for any city that provides free bus service, according to Politico—a direct threat to the mayor’s ambitious plans. Nevertheless, smarter systems that give buses a few extra seconds to make it through an intersection could be the edge that makes public transportation in cities across the country faster and more reliable. View the full article
-
UK ban on Palestine Action ruled unlawful
Ministers argued the direct action group engaged in a campaign of criminal damage and violence. View the full article
-
How women’s skiwear falls short when it comes to actually skiing
Marks & Spencer is one of the latest U.K. high-street brands to launch a skiwear collection. Even supermarket Lidl is in on the action, with items in its ski range priced at less than 5 pounds (roughly $6.75). This follows earlier moves by fast-fashion retailers such as Topshop, which launched SNO in the mid-2010s, and Zara’s imaginatively titled Zara Ski collection, which launched in 2023. Fast-fashion brand PrettyLittleThing’s Apres Ski edit (a collection of clothes chosen for a specific theme) tells potential shoppers that going skiing is “not necessarily essential,” which is good, because many of the products in the collection are listed as athleisure, not sportswear. It’s not just the high street. Kim Kardashian’s shapewear brand Skims has recently collaborated with the North Face and has dressed Team USA for the 2026 Winter Olympics—though these garments are strictly designed to serve the athletes during downtime, not for the piste. Alongside dedicated skiwear lines, the apres-ski aesthetic has become a recurring seasonal trend over recent years, expanding well beyond the slopes. You may have noticed the slew of ski-themed sweatshirts across the market. One of these, an Abercrombie & Fitch sweatshirt, went viral in January after a buyer noticed that the depicted resort was actually Val Thorens, France—not Aspen, Colorado, as the text printed on the garment claimed. It is not only the quality of ski-themed fashion products that is a cause for concern, but also that of garments designed for the slopes. Many of these high-street collections have received criticism from consumers, with some claiming that the garments are “not fit for purpose.” Meanwhile, many influencers have taken to social media to warn their followers to avoid skiing in garments from fast-fashion brands.
Such were the complaints that Zara Ski reportedly renamed its products “water resistant” instead of “waterproof.” These collections respond, in part, to a genuine need for women’s sportswear that is practical, fashionable, and, most critically, affordable. Ski and performance wear in general is costly, so collections that are both fashionable and relatively low-cost make for an attractive prospect. And yet, if these garments are so poorly suited to skiing, then what are they for?
The visual allure of skiing
Despite sports playing a key role in challenging gender ideology and perceptions of female physicality, the perceived importance of femininity and of how women look while doing sports has lingered. Images of sportswomen frequently fixate on gender difference, foregrounding femininity over athleticism. Here, the glamorous image of skiing has much to account for. Glamour relies on distance and difference to conjure a feeling of longing. For many, the novelty of eating fondue at 3,000 feet is out of reach, as is the ever-increasing price of a lift pass. Throughout the 20th century, the glamour of skiing has been defined by women’s fashion. In the 1920s, Vogue magazine featured illustrations of elongated skiing women on its covers. Designer Pucci’s aerodynamic one-piece ski suit premiered in Harper’s Bazaar magazine in 1947, while Moncler’s ski anoraks—photographed on Jackie Kennedy in 1966—gave birth to a vision of American ski “cool.” Changing ski fashions were recorded in photographer Slim Aarons’s resort photography, capturing the leisure class on and off piste between the 1950s and 1980s. Women’s fashionable skiwear has taken many forms since the activity first became popular in the 1920s. It was during this decade that skiing became a marker of affluence. Leather, gabardine, fur, and wool were popular materials in early women’s skiwear and were selected for their natural properties: water repellence, insulation, and breathability.
By midcentury, women’s skiwear became more focused on silhouette, and excess fabric was considered unfeminine. Equally, skiwear gradually became more colourful, and in the fashion press women were even encouraged to match their lipstick to their ski ensemble. By the 1980s, skiwear aligned with the fashionable “wedge” silhouette, causing the shoulders of ski jackets to widen and salopettes (ski trousers with shoulder braces) to draw even tighter. These historic developments parallel today’s aesthetic ski trend, where fashion and image arguably come before function. For example, PrettyLittleThing’s models are photographed on fake slopes, holding vintage skis. The glamorous image of the skiing woman lies not only in the clothing but in her stasis. The suggestion is that ski culture does not necessarily require skiing at all: It may simply involve occupying the most visible terrace, Aperol in hand. No wonder, then, that so many fast-fashion ski lines for women are deeply impractical—they appear designed less for physical exertion than for visual consumption. They sell women on the alluring glamour of skiing, while leaving them out in the cold. There is an additional irony here: Climate change means that skiing is becoming increasingly exclusive. Lower-level resorts are closing as the snow line moves up, meaning fewer options and increased demand. In this sense, the image of skiing looks set to become even more glamorous through increasing inaccessibility, and therefore distance. Fast fashion has a negative impact on the environment, and the ski aesthetic risks damaging the very thing it claims to celebrate. This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org, The Conversation UK may earn a commission. Tamsin Johnson is a PhD candidate in visual cultures at Nottingham Trent University.
This article is republished from The Conversation under a Creative Commons license. Read the original article. View the full article
-
How to let go of resentment on the job
No matter how much you like your coworkers, you’re going to have some conflicts with them. Most of those conflicts involve differences of opinion or approach. A colleague may do something that irks you or causes difficulties for the work you’re doing. While those conflicts may create tension for a time, you typically get past them and may even wind up with a closer relationship afterward. But with some colleagues, anger hardens into resentment. That can cause real workplace problems, because you’re still going to have to engage with that colleague, which can get in the way of a project’s success. Plus, no matter how good you think you are at hiding your resentments, chances are your feelings for that person shine through in your engagements with them as well as in your conversations about them. Not only will those resentments make projects harder to do, they can also stand in the way of your success in your organization. After all, most promotions involve moving up in leadership. Companies like to promote individuals they think will bring people together rather than divide them. Your resentments mark you as a source of division rather than unity. So, how can you get over a resentment? After all, you can’t just wave a magic wand and have your feelings go away.
Talk it out
The best strategy for dealing with a resentment is to talk about it with your colleague. When someone has done something that continues to bother you, it can be valuable to clear the air. Conversations like this aren’t always an option, but when they are, they can be quite effective in moving your relationship forward (even if they are uncomfortable in the moment). Invite your colleague out for coffee. Your colleague might be surprised by this invitation, because (chances are) they know that you are annoyed at them. Tell them what they did, how it affected you, and why you are still upset about it.
Before you have that conversation, you should actually practice saying all of this so that you have words to describe it clearly. Don’t wing it. This strategy can be helpful for a few reasons. First, there are times when you say your grievance out loud while practicing and realize that the problem is you. That is, you may discover that you have been making a bigger deal out of something than it is worth. Second, there are times when the other party doesn’t realize the impact their actions had on you. This conversation may help them to better recognize the impact of what they do on others. Third, this conversation is likely to help you see the event from a different perspective. When you talk out a complicated interaction, you may find that the other person’s actions were completely sensible from their perspective, even though you had assumed they had bad intent.
Forgive (and forget)
Another powerful tool for dealing with resentment is to forgive the other person. The resentment you’re carrying is fundamentally your ongoing reaction to that person as a result of what they did. When you see them or think about them, you are reminded of what they did, and the bad feeling wells up again. When you forgive someone, you acknowledge what they did and the harm it caused, and then you let it go. Research suggests that forgiveness primarily benefits the forgiver. In particular, when you forgive someone, it dampens the negative emotions you experience later. It also makes some of the details of what the other person did less memorable. So, by forgiving the other person, you are taking an important step toward enabling that resentment to have less impact on your behavior in the future than it does now.
Look in the mirror
If you find yourself unable to talk with the other person or to forgive them, it is time to take a look at yourself.
No matter how good a person you are or how much you strive to be a good colleague, you have probably had some moments where your actions harmed someone else. Because you like to think of yourself as a good person, you probably focus less on your bad moments than on your good ones. As a result, you may not remember some of the times that your actions had a negative impact on others. When you call to mind a few instances of your own less-than-stellar behavior, it can sometimes open you up to forgiving someone else. It can be particularly helpful if you think about times that other people have forgiven you for something you did. Imagine what your life would be like if everyone resented you for things you did in your worst moments. Recognize that your own career and success are owed in part to the willingness of others to forgive you. Finally, just because you forgive someone or let go of a resentment doesn’t mean you have to trust them blindly. If someone has treated you badly in the past and you are not convinced that they are reformed, you should still be vigilant when you work with them in the future. You can be careful in your engagements with a colleague while still treating them cordially and respectfully. View the full article
-
How to meet the surging energy demand without needing as much new electricity
This story was originally published by Grist. Sign up for Grist’s weekly newsletter here. The conversation around energy use in the United States has become . . . electric. Everyone from President Donald Trump to the cohosts of the Today show has been talking about the surging demand for, and rising costs of, electrons. Many people worry that utilities won’t be able to produce enough power. But a report released today argues that the better question is: Can we use what utilities already produce more efficiently in order to absorb the coming surge? “A lot of folks have been looking at this from the perspective of, Do we need more supply-side resources and gas plants?” said Mike Specian, utilities manager with the nonprofit American Council for an Energy-Efficient Economy, or ACEEE, who wrote the report. “We found that there is a lack of discussion of demand-side measures.” When Specian dug into the data, he discovered that implementing energy-efficiency measures and shifting electricity usage to lower-demand times are two of the fastest and cheapest ways of meeting the growing thirst for electricity. These moves could help meet much, if not all, of the nation’s projected load growth. Moreover, they would cost only half—or less—what building out new infrastructure would, while avoiding the emissions those operations would bring. But Specian also found that governments could be doing more to incentivize utilities to take advantage of these demand-side gains. “Energy efficiency and flexibility are still a massive untapped resource in the U.S.,” he said. “As we get to higher levels of electrification, it’s going to become increasingly important.” The report estimated that by 2040, utility-driven efficiency programs could cut usage by about 8 percent, or around 70 gigawatts, and that making those cuts currently costs around $20.70 per megawatt-hour. The cheapest gas-fired power plants now start at about $45 per megawatt-hour generated.
While the cost of load shifting is harder to pin down, the report estimates that moving electricity use away from peak hours—often through time-of-use pricing, smart devices, or utility controls—to times when the grid is less strained and power is cheaper could save another 60 to 200 gigawatts of power by 2035. That alone would far outweigh even the most aggressive near-term projections for data center capacity growth. Vijay Modi, director of the Quadracci Sustainable Engineering Laboratory at Columbia University, agrees that energy efficiency is critical but isn’t sure how many easy savings are left to be had. He also believes that governments at every level—rather than utilities—are best suited to incentivize that work. He sees greater potential in balancing loads to ease peak demand. “This is a big concern,” he said, explaining that when peak load goes up, it could require upgrading substations, transformers, power lines, and a host of other distribution equipment. That raises costs and rates. Utilities, he added, are well positioned to solve this because they have the data needed to effectively shift usage and are already taking steps in that direction by investing in load management software, installing battery storage, and generating electricity closer to end users with things like small-scale renewable energy. “It defers some of the heavy investment,” said Modi. “In turn, the customer also benefits.” Specian says that one reason utilities tend to focus on the supply side of the equation is that they can often make more money that way. Building infrastructure is considered a capital investment, and utilities can pass that cost on to customers, plus an additional rate of return, or premium, which is typically around 10 percent. Energy-efficiency programs, however, are generally considered an operating expense, which isn’t eligible for a rate of return.
This setup, he said, motivates utilities to build new infrastructure rather than conserve energy, even if the latter presents a more affordable option for ratepayers. “Our incentives aren’t properly lined up,” said Specian. State legislators and regulators can address this, he said, by implementing energy-efficiency resource standards or performance-based regulation. “Decoupling,” which separates a company’s revenue from the amount of electricity it sells, is another tactic that many states are adopting. Joe Daniel, who runs the carbon-free electricity team at the nonprofit Rocky Mountain Institute, has also been watching a model known as “fuel cost sharing,” which allows utilities and ratepayers to share any savings or added costs rather than passing them on entirely to customers. “It’s a policy that seems to make logical sense,” he said. A handful of states across the political spectrum have adopted the approach, and of the people he’s spoken with or heard from, Daniel said “every consumer advocate, every state public commissioner, likes it.” The Edison Electric Institute, which represents all of the country’s investor-owned electric companies, told Grist that regardless of regulation, utilities are making progress in these areas. “EEI’s member companies operate robust energy-efficiency programs that save enough electricity each year to power nearly 30 million U.S. homes,” the organization said in a statement. “Electric companies continue to work closely with customers who are interested in demand response, energy efficiency, and other load-flexibility programs that can reduce their energy use and costs.” Because infrastructure changes happen on long timelines, it’s critical to keep pushing on these levers now, said Ben Finkelor, executive director of the Energy and Efficiency Institute at the University of California, Davis. “The planning is 10 years out,” he said, adding that preparing today could save billions in the future. 
“Perhaps we can avoid building those baseload assets.” Specian hopes his report reaches legislatures, regulators, and consumers alike. Whoever reads it, he says the message should be clear. —By Tik Root This article originally appeared in Grist. Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org. View the full article
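The time-of-use load shifting discussed in this piece can be illustrated with a back-of-the-envelope sketch. The rates, peak window, and usage profile below are all hypothetical, chosen only to show why moving flexible load off peak lowers a bill without reducing total consumption.

```python
# Hypothetical time-of-use (TOU) rates in $/kWh; real tariffs vary by utility.
PEAK_RATE, OFF_PEAK_RATE = 0.40, 0.15
PEAK_HOURS = range(16, 21)  # 4 p.m. to 9 p.m., a common peak window

def daily_cost(hourly_kwh):
    """Cost of a 24-entry hourly usage profile under the TOU tariff."""
    return sum(
        kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour, kwh in enumerate(hourly_kwh)
    )

def shift_flexible_load(hourly_kwh, flexible_kwh, to_hour=2):
    """Move up to `flexible_kwh` of peak usage to an off-peak hour (2 a.m. here)."""
    profile = list(hourly_kwh)
    remaining = flexible_kwh
    for hour in PEAK_HOURS:
        moved = min(profile[hour], remaining)
        profile[hour] -= moved
        profile[to_hour] += moved
        remaining -= moved
    return profile

flat = [1.0] * 24                       # 1 kWh every hour of the day
shifted = shift_flexible_load(flat, 3)  # move 3 kWh off peak
```

In this toy profile, shifting 3 kWh off peak cuts the daily bill from $4.85 to $4.10 while total consumption stays at 24 kWh. That is the core idea behind time-of-use incentives: the grid sheds peak strain and the customer pays less, with no change in how much energy is used.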
-
Why world models will become a platform capability, not a corporate superpower
For the past two years, artificial intelligence has felt oddly flat. Large language models spread at unprecedented speed, but they also erased much of the competitive gradient. Everyone has access to the same models, the same interfaces, and, increasingly, the same answers. What initially looked like a technological revolution quickly started to resemble a utility: powerful, impressive, and largely interchangeable, a dynamic already visible in the rapid commoditization of foundation models across providers like OpenAI, Google, Anthropic, and Meta. That flattening is not an accident. LLMs are extraordinarily good at one thing—learning from text—but structurally incapable of another: understanding how the real world behaves. They do not model causality, they do not learn from physical or operational feedback, and they do not build internal representations of environments, important limitations that even their most prominent proponents now openly acknowledge. They predict words, not consequences, a distinction that becomes painfully obvious the moment these systems are asked to operate outside purely linguistic domains.
The false choice holding AI strategy back
Much of today’s AI strategy is trapped in binary thinking. Either companies “rent intelligence” from generic models, or they attempt to build everything themselves: proprietary infrastructure, bespoke compute stacks, and custom AI pipelines that mimic hyperscalers. That framing is both unrealistic and historically illiterate. Most companies did not become competitive by building their own databases. They did not write their own operating systems. They did not construct hyperscale data centers to extract value from analytics. Instead, they adopted shared platforms and built highly customized systems on top of them, systems that reflected their specific processes, constraints, and incentives. AI will follow the same path.
World models are not infrastructure projects
World models, systems that learn how environments behave, incorporate feedback, and enable prediction and planning, have a long intellectual history in AI research. More recently, they have reemerged as a central research direction precisely because LLMs plateau when faced with reality, causality, and time. They are often described as if they required vertical integration at every layer. That assumption is wrong. Most companies will not build bespoke data centers or proprietary compute stacks to run world models. Expecting them to do so repeats the same mistake seen in earlier “AI-first” or “cloud-native” narratives, where infrastructure ambition was confused with strategic necessity. What will actually happen is more subtle and more powerful: World models will become a new abstraction layer in the enterprise stack, built on top of shared platforms in the same way databases, ERPs, and cloud analytics are today. The infrastructure will be common. The understanding will not.
Why platforms will make world models ubiquitous
Just as cloud platforms democratized access to large-scale computation, emerging AI platforms will make world modeling accessible without requiring companies to reinvent the stack. They will handle simulation engines, training pipelines, integration with sensors and systems, and the heavy computational lifting—exactly the direction already visible in reinforcement learning, robotics, and industrial AI platforms. This does not commoditize world models. It does the opposite. When the platform layer is shared, differentiation moves upward. Companies compete not on who owns the hardware, but on how well their models reflect reality: which variables they include, how they encode constraints, how feedback loops are designed, and how quickly predictions are corrected when the world disagrees. Two companies can run on the same platform and still operate with radically different levels of understanding.
From linguistic intelligence to operational intelligence
LLMs flattened AI adoption because they made linguistic intelligence universal. But purely text-trained systems lack deeper contextual grounding, causal reasoning, and temporal understanding, limitations well documented in foundation-model research. World models will unflatten it again by reintroducing context, causality, and time, the very properties missing from purely text-trained systems. In logistics, for example, the advantage will not come from asking a chatbot about supply chain optimization. It will come from a model that understands how delays propagate, how inventory decisions interact with demand variability, and how small changes ripple through the system over weeks or months.
Where competitive advantage will actually live
The real differentiation will be epistemic, not infrastructural. It will come from how disciplined a company is about data quality, how rigorously it closes feedback loops between prediction and outcome (Remember this sentence: Feedback is all you need), and how well organizational incentives align with learning rather than narrative convenience. World models reward companies that are willing to be corrected by reality, and punish those that are not. Platforms will matter enormously. But platforms only standardize capability, not knowledge. Shared infrastructure does not produce shared understanding: Two companies can run on the same cloud, use the same AI platform, even deploy the same underlying techniques, and still end up with radically different outcomes, because understanding is not embedded in the infrastructure. It emerges from how a company models its own reality. Understanding lives higher up the stack, in choices that platforms cannot make for you: which variables matter, which trade-offs are real, which constraints are binding, what counts as success, how feedback is incorporated, and how errors are corrected.
A platform can let you build a world model, but it cannot tell you what your world actually is. Think of it this way: Every company using SAP does not have the same operational insight. Every company running on AWS does not have the same analytical sophistication. The infrastructure is shared; the mental model is not. The same will be true for world models. Platforms make world models possible. Understanding makes them valuable.
The next enterprise AI stack
In the next phase of AI, competitive advantage will not come from building proprietary infrastructure. It will come from building better models of reality on top of platforms that make world modeling ubiquitous. That is a far more demanding challenge than buying computing power. And it is one that no amount of prompt engineering will be able to solve. View the full article
-
Bing Now Shows Which Pages Get Cited in AI Answers
Bing Webmaster Tools now shows how often your content is cited in AI answers. See what the dashboard tracks, what’s missing, and how to act on the data. View the full article
-
Work Breakdown Structure (WBS) Guide: Examples, Templates & Methods
A Work Breakdown Structure (WBS) is a hierarchical breakdown of the tasks required to complete a project. Learn how it can help you manage your projects. The post Work Breakdown Structure (WBS) Guide: Examples, Templates & Methods appeared first on project-management.com. View the full article
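The hierarchical decomposition a WBS describes maps naturally onto a tree. Below is a minimal, hypothetical sketch (the project and task names are invented for illustration) showing how leaf-level effort estimates roll up to parent deliverables.

```python
# Minimal, illustrative sketch of a Work Breakdown Structure as a tree:
# leaf nodes carry effort estimates; parent estimates roll up from children.
from dataclasses import dataclass, field

@dataclass
class WBSNode:
    name: str
    hours: float = 0.0                 # estimate for leaf tasks only
    children: list = field(default_factory=list)

    def total_hours(self) -> float:
        # A leaf reports its own estimate; a parent sums its children.
        if not self.children:
            return self.hours
        return sum(child.total_hours() for child in self.children)

# Hypothetical two-level decomposition of a project.
project = WBSNode("Website redesign", children=[
    WBSNode("Design", children=[
        WBSNode("Wireframes", hours=16),
        WBSNode("Visual design", hours=24),
    ]),
    WBSNode("Build", children=[
        WBSNode("Frontend", hours=40),
        WBSNode("Backend", hours=32),
    ]),
])
```

Rolling estimates up the tree is what makes a WBS useful for planning: here `project.total_hours()` returns 112, the sum of all leaf tasks.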
-
Brits and Europeans bumping into each other makes Heathrow feel busy, says boss
Thomas Woldbye says passengers at UK’s only hub airport are often in ‘the wrong place’ View the full article
-
If AI is doing the work, leaders need to redesign jobs
Most managers are using AI the same way they use any productivity tool: to move faster. It summarizes meetings, drafts responses, and clears small tasks off the plate. That helps, but it misses the real shift. The real change begins when AI stops assisting and starts acting. When systems resolve issues, trigger workflows, and make routine decisions without human involvement, the work itself changes. And when the work changes, the job has to change too. Let’s take the example of an airline and lost luggage. Generative AI can explain what steps to take to recover a lost bag. Agentic AI aims to actually find the bag, reroute it, and deliver it. The person who was working in lost luggage, doing these easily automated tasks, can now be freed to become more of a concierge for disgruntled passengers. As agentic AI solves the problem, the human handles the soft skills: apologizing, offering vouchers to smooth the transition for a passenger whose arrival was disrupted by a misplaced bag, and perhaps going a step further to recommend local shops where they can pick up supplies. With AI moving from reporting information to taking action, leaders can now rethink how jobs are designed, measured, and supported to maximize both the potential of the position and the abilities of the person in it. According to data from McKinsey, 78% of respondents say their organizations use AI in at least one business function, though some are still applying it on top of existing roles rather than redesigning work around it.
1. When tasks disappear, judgment becomes the job
Many roles are still structured around task lists: answer tickets, process requests, close cases. As AI takes on more repeatable execution, what remains for humans are exceptions, tradeoffs, and judgment calls that don’t come with a script. Take, for example, a member of the service team at a car dealership.
Until now, the majority of their tasks have been scheduling appointments, sending follow-up emails, and making follow-up calls and texts. Agentic AI can remove the bulk of that work. Now that member of the team can make the decisions that require nuance and critical thinking. They know that the owner of a certain vehicle is retired and has trouble getting around. They can see that their appointment is on a morning when it might snow. The human then calls the customer and rebooks them for when the weather is more favorable. These sorts of human touches are what will set this dealership apart and grow customer loyalty.
2. Measure what humans now contribute
As AI absorbs volume, measuring people on speed and responsiveness pushes them to compete with machines on machine strengths. Instead, evaluation should reflect what humans uniquely provide: quality of judgment, ability to prevent repeat issues, and stewardship of systems that learn over time. In the example above, the service team member at the car dealership could now be assessed not by the number of appointments set or cancellations rescheduled, but by outcomes such as customer satisfaction and repeat business. The KPIs should include in-person or over-the-phone touchpoints with a customer to upsell or suggest better services that their vehicle will need.
3. Human accountability for AI work
When AI is involved, ownership has to be explicit. Someone must own outcomes, even if a system takes the action. Someone must own escalation rules, workflows, and reviews. Without that clarity, AI doesn’t reduce friction; it just shifts it to the moment something goes wrong. In the car dealership example, a human should still be overseeing the AI agents doing the work and ensuring that it’s done well. If there are problems, they should be able to catch them and come up with solutions.
One of the biggest risks with AI isn’t failure; it’s neglect by the humans who should be overseeing the overall strategy and bigger goals the AI is serving. Systems that “mostly work” fade into the background until they don’t. Teams need protected time to review where AI performed well, where it struggled, and why.
Looking ahead
This shift isn’t theoretical. Klarna has publicly described how its AI assistant now handles a significant share of customer service interactions, an example of how quickly AI moves from support tool to frontline worker. Once AI is doing real work, the old job descriptions stop making sense. Roles, accountability, metrics, and oversight all need to be redesigned together. AI improves fastest when humans actively review and guide it, not when oversight is treated as an afterthought. The next phase of work isn’t about managing people plus tools. It’s about designing systems where expectations are clear, ownership is explicit, humans focus on meaningful decisions, and AI quietly handles the rest. If leaders don’t redesign the job intentionally, it will be redesigned for them, by the technology, by urgent failures, and by the slow erosion of clarity inside their teams. View the full article
-
my boss thinks our obnoxious coworker is funny, medical tech proselytized to me, and more
It’s four answers to four questions. Here we go…
1. A medical tech repeatedly proselytized to me
An experience I had recently with a medical provider has me wondering if what I felt to be inappropriate and unprofessional is a behavior worth raising with my doctor, who owns the practice. I live in an area of the South where most people assume that everyone is Christian and believes in God — the kind of place where wishing someone “Happy Holidays” is likely to result in a tonally aggressive reply of “Merry Christmas.” Usually I let religious talk in various businesses just roll off me. I recently underwent TMS treatment for chronic, major depression. As part of that, I received 36 treatments that required me to go to my psychiatrist’s office every weekday for five-minute sessions with one of the techs. Early in the treatment, the tech would reference God and how he helped her, and I just let it ride and wouldn’t engage. But by the final two weeks, she escalated to asking me about my own beliefs. I eventually told her I’m not religious. She spent the next few sessions telling me that if I would just let God into my life, that would make all the difference. I expressed discomfort with the topic (clearly and directly), but she persisted. So my question is whether this is worth mentioning to the psychiatrist on my next visit. This is most definitely not a religiously affiliated practice. Part of me feels terrible about the idea of getting her in trouble. I do believe she meant well. Plus, I have to go to the office every few months and will likely encounter her, as she is in the front office when not administering treatments. So that could be awkward. But I’m also highly annoyed that I was repeatedly proselytized to while essentially a captive audience. What do you think? Would you want this behavior reported to you if it were your employee? Without any doubt whatsoever, I would strongly want to know about it!
In fact, I would be horrified if I found out this had been going on and no one had told me. Hopefully your doctor feels the same way. The tech is representing the medical practice and the doctor; she’s not there to proselytize, and you’re not there to be proselytized to. It would be wildly inappropriate under any circumstances, but the fact that she persisted after you asked her to stop makes it even worse. Tell your doctor what happened. Say it was frequent and persistent, that she didn’t stop after you asked her to, and that you don’t come there to be proselytized at.

2. My boss thinks our obnoxious, racist coworker is funny

My workplace has become increasingly toxic due to poor management and enabling of inappropriate behavior. Our manager is a bully who operates by singling out team members while cultivating favorites and gossiping about colleagues. Her current favorite is Ryan, a 25-year-old man in his first professional role who has been with the team for two years. While Ryan is fundamentally a nice person, he lacks professional maturity. The rest of the team consists of women at least twice his age, some of whom actively encourage his behavior because they want to be in his good graces.

Because Ryan is protected by our manager, he faces no consequences for increasingly disruptive behavior:

* Constant crude humor (fart jokes throughout the day)
* Physical pranks (lowering colleagues’ chairs while they’re working)
* Graphic discussions of his sex life
* Showing explicit images to female colleagues
* Making racist and anti-immigrant comments

When I’ve tried to address this, some colleagues tell me I’m being “uptight” and that he “improves the vibe.” Our manager witnesses much of this behavior and either laughs along or gives him minimal warnings. I’m concerned that making a formal complaint will result in workplace retaliation, both from the manager and from colleagues who see Ryan as popular.
How can I professionally address his behavior without isolating myself or becoming a target?

How’s your HR? Ideally you’d report what’s happening to HR (meaning both Ryan and your manager), specifically say that you’re concerned about retaliation from your manager and coworkers for reporting it, and ask them to take clear steps to ensure that doesn’t happen. Legally, they’re obligated to do that; permitting a manager to retaliate against an employee for making a good-faith report of harassment or discrimination is illegal — and employment lawyers will tell you that retaliation can be a lot easier to prove than harassment or discrimination is. But companies break the law in this area all the time, so you’d want to have some idea of how your company’s HR handles things.

If HR isn’t an option, the alternative is to call it out in the moment and not be deterred by coworkers saying you’re too uptight. Sample language:

* “I don’t want to hear about your sex life. Please stop talking about it.”
* “Don’t use language like that around me.”
* “That’s an awful thing to say.”
* “You could hurt someone doing that, and you’re putting the company at legal risk.”
* “If you show me photos like that again, I’ll ask HR to tell you to stop.”
* “This is getting really boring.”

But there’s no way to push back on Ryan that guarantees you won’t become a target yourself, particularly with the sort of manager you described. Can you work on getting out of there? For what it’s worth, I’m pretty skeptical that Ryan is a nice person.

Related:
* how to deal with a racist coworker
* is it worth going to HR about a bad manager?

3. When the reference-checker is an employee I fired

At a former job, two employees on my team were Philip and Elizabeth. Elizabeth’s work was okay, but she was a toxic personality and I ended up terminating her employment. (There is of course more to this story, but it isn’t relevant to my question.) Philip and Elizabeth were peers and I believe got on fine.
Philip was a great employee. He and I have both since left for other companies. Philip reached out asking me to be a reference for a new job, and I am very happy to do so. However, I just heard from the recruiter at his potential new employer, and the person they want to set me up with to talk about Philip is Elizabeth, who now works there. I fired her not quite two years ago, and I absolutely do not want to talk to her. Nor can I imagine she’d want to talk to me. And I don’t want to harm Philip’s chances. He knows I fired Elizabeth but not any specifics.

What do I do? I’m leaning toward telling the recruiter I’m happy to recommend Philip but Elizabeth and I have a negative history. But obviously this employer must like Elizabeth, so I’m concerned anything I say will reflect badly on Philip. Tell Philip he should find another reference? Help!

I agree with your instincts! Tell the recruiter that you enthusiastically recommend Philip but that you have a complicated history with Elizabeth from having worked together in the past, and ask whether there’s someone else there you could give the reference to instead. If the recruiter says Elizabeth is the only option — well, ideally you’d suck it up and do it … but if you think that’s likely to harm Philip’s chances, then at that point you should lay it out for him and ask how he’d like to handle it. Sample language for that: “I’m happy to give anyone who asks a glowing reference for you but, between the two of us, there’s some tension between Elizabeth and me, and I don’t want that to hurt your chances at this job. Would you like me to go ahead and talk to her, or would you rather give them someone else to speak to?”

4. Does “don’t take a counteroffer” apply when both offers are internal?

I really appreciated the post that gathered all of your advice on counteroffers together in one place! I’ve been curious whether your advice changes when the second offer is an internal one.
How do you approach things when you’ve been holding out for and/or been promised a promotion or a new role that’s taking forever to materialize — but accepting an interview (or getting an offer, keep your fingers crossed for me!) in another department gets your current leader to make the dangled position materialize? Do the same principles apply as when it’s two companies vying for you?

A lot of the same principles apply: you still want to ask yourself why it took you being ready to leave for your manager to get it together for you, and whether it’ll be a similar battle to get other things you’ve earned in the future. The same caveats apply about making sure they’re really going to follow through on their promises, not resume dragging their feet once the immediate crisis of you leaving is averted. The piece that can be different is that your company is less likely to see you as “disloyal” (a ridiculous concept regardless) — but you should weight the other factors pretty heavily.

The post my boss thinks our obnoxious coworker is funny, medical tech proselytized to me, and more appeared first on Ask a Manager. View the full article
-
Trump plans to roll back tariffs on metal and aluminium goods
Latest softening of levies comes amid persistent voter anxiety about affordability in the US View the full article
-
Bankers push to avoid US regulator taking charge of British supervisor
Michael Hsu is among the frontrunners to succeed Sam Woods at BoE Prudential Regulation Authority View the full article
-
Schroders is the defining deal of a glass half-empty UK
Asset manager is ending its listed life with a whimper rather than a bang View the full article
-
Schroders boss reassured UK Treasury ahead of £9.9bn US takeover
Richard Oldfield says sale of centuries-old institution to US investment firm is a ‘good deal for the UK’ View the full article