Everything posted by ResidentialBusiness
-
More storms to hit central U.S. after deadly storms and tornado damage
More severe storms were expected to roll across the central U.S. this week following the weather-related deaths of more than two dozen people and a devastating Kentucky tornado. The National Weather Service said a “multitude of hazardous weather” would impact the U.S. over the next several days—from thunderstorms and potentially baseball-size hail on the Plains, to heavy mountain snow in the West and dangerous heat in the South. Areas at risk of thunderstorms include communities in Kentucky and Missouri that were hit by Friday’s tornadoes. In London, Kentucky, people whose houses were destroyed scrambled Sunday to put tarps over salvageable items or haul them away for safe storage, said Zach Wilson. His parents’ house was in ruins and their belongings scattered. “We’re trying the hardest to get anything that looks of value and getting it protected, especially pictures and papers and things like that,” he said. Here’s the latest on the recent storms, some tornado history, and where to look out for the next weather impacts.

Deadly storms claim dozens of lives

At least 19 people were killed and 10 seriously injured in Kentucky, where a tornado on Friday damaged hundreds of homes and tossed vehicles in southeastern Laurel County. Officials said the death toll could rise and that three people remained in critical condition Sunday. Wilson said he raced to his parents’ home in London, Kentucky, after the storm. “It was dark and still raining, but every lightning flash, it was lighting up your nightmares: Everything was gone,” he said. “The thankful thing was me and my brother got here and got them out of where they had barricaded themselves.” Survey teams were expected on the ground Monday so the state can apply for federal disaster assistance, Gov. Andy Beshear said. Some of the two dozen state roads that had closures could take days to reopen. In St. Louis, five people died and 38 were injured as the storm system swept through on Friday, according to Mayor Cara Spencer. More than 5,000 homes in the city were affected, she said. On Sunday, city inspectors were going through damaged areas to condemn unsafe structures, Spencer said. She asked people not to sightsee in damaged areas. A tornado that started in the St. Louis suburb of Clayton traveled at least 8 miles (13 kilometers), had 150-mph (241-kph) winds, and had a maximum width of 1 mile (1.6 kilometers), according to the weather service. It touched down in the area of Forest Park, home to the St. Louis Zoo and the site of the 1904 World’s Fair and the Olympic Games that same year. In Scott County, about 130 miles (209 kilometers) south of St. Louis, a tornado killed two people, injured several others, and destroyed multiple homes, Sheriff Derick Wheetley wrote on social media. The weather system spawned tornadoes in Wisconsin and temporarily enveloped parts of Illinois—including Chicago—in a pall of dust. Two people were killed in the Virginia suburbs of Washington, D.C., by falling trees while driving. The storms hit after the Trump administration cut staffing at weather service offices, with outside experts worrying about how the cuts would affect warnings in disasters such as tornadoes.

A history of tornadoes

The majority of the world’s tornadoes occur in the U.S., which has about 1,200 annually. Researchers in 2018 found that deadly tornadoes were happening less frequently in the traditional “Tornado Alley” of Oklahoma, Kansas, and Texas, and more frequently in parts of the more densely populated and tree-filled South.
They can happen any time of day or night, but certain times of the year bring peak “tornado season.” That’s from May into early June for the southern Plains, and earlier in the spring on the Gulf Coast. The deadliest tornado in Kentucky’s history was hundreds of yards wide when it tore through downtown Louisville’s business district in March 1890, collapsing multistory buildings including one with 200 people inside. Seventy-six people were killed. The last tornado to cause mass fatalities in Kentucky was a December 2021 twister that lasted almost five hours. It traveled some 165 miles (266 kilometers), leaving a path of destruction that included 57 dead and more than 500 injured, according to the weather service. Officials recorded at least 41 tornadoes during that storm, which killed at least 77 people statewide. On the same day, a deadly tornado struck the St. Louis area, killing six people at an Amazon facility in nearby Illinois.

More storms threaten in coming days

Thunderstorms with potentially damaging winds were forecast for a region stretching from northeast Colorado to central Texas. And tornadoes will again be a threat, particularly from central Kansas to Oklahoma, according to the weather service. Meanwhile, triple-digit temperatures were forecast for parts of south Texas, with the potential to break daily records. The hot, dry air also sets the stage for critical wildfire conditions through early this week in southern New Mexico and West Texas. Up to a foot of snow was expected in parts of Idaho and western Montana. —Matthew Brown and Carolyn Kaster, Associated Press View the full article
-
Google Ads bug stalls spending for New Customer Acquisition
A confirmed bug in Google Ads caused New Customer Acquisition (NCA) campaigns to stop spending budgets starting May 15. The issue has affected advertisers relying solely on this bidding strategy to reach new customers.

The details: Google acknowledged the problem in an email shared by Google Ads consultant Benoit Legendre. “We are aware of an issue where the campaigns running on New Customer Acquisition only, have stopped spending starting May 15th,” Google said. The company added that its engineering team is actively working on a fix, but initially provided no specific timeline.

Why we care. Advertisers using NCA-only bidding have seen campaign performance stall for days, potentially disrupting customer acquisition goals and monthly spend pacing. Google Ads is a critical advertising platform. Issues like this can have wide-reaching financial and strategic implications.

The workaround. While the fix was pending, Google advised advertisers to temporarily switch bidding from “New Customer Acquisition only” to “New and existing customers.” This change can resume campaign delivery until the issue is resolved.

The update. As of 11:45am ET on May 19, Google’s Ads Liaison Ginny Marvin confirmed that the engineering team “fully mitigated the issue.” NCA campaign performance should now begin returning to normal.

First seen. The bug was first brought to broader attention via Legendre’s LinkedIn post, spotted by PPC News Feed and shared by Hana Kobzová. View the full article
-
What Ginnie Mae's newest executive is planning for mortgages
COO Joseph Gormley weighed in on cuts at the securitization guarantor and efforts to improve the industry's efficiencies and the government's. View the full article
-
Lenders feel better, not exuberant, about housing market
The Mortgage Bankers Association's latest forecast reflects the industry's current views on where its business is going, said Mike Fratantoni. View the full article
-
HR panicked my employee by sending a mysterious meeting request right before the weekend
A reader writes: We received and validated some complaints about language used by a member of my team — off-color jokes, insensitive comments, etc. I agreed with HR that this did not rise to the level of a formal warning, but we would have a documented sit-down with the associate to explain it wasn’t acceptable and should not happen again, and further instances would have escalating consequences. Before this, the employee was a high performer without issues. HR scheduled the meeting on Friday for the following Monday with a very generic subject line and said that she wished to discuss “communication” and included my manager in the invite as a courtesy (she is aware of the situation and supports the approach). My employee immediately rang me, asking what the topic was. I explained as best I could and said that we would go into details together. But I am not keen on the communication on this topic. I would have preferred to raise this in our regular 1:1 meeting and then follow up with an email including all and summarizing the topic. Am I right to think that this approach should have been more transparent up-front, especially over the weekend?

Yes, absolutely. Most people who receive a mysterious request to meet with HR, their manager, and their manager’s manager the following week with no details about the topic and the subject line “communication” would be a little concerned, at a minimum. Others would be full-on panicking. Leaving that hanging over them all weekend with no information is unkind. And yes, this is something you should be able to handle on your own in a one-on-one meeting, anyway. If HR wants to be there, fine — but they should have coordinated with you about how it would be handled and not sent this cryptic email on their own. It’s crappy. It sounds like once your employee asked you about it, you told them the basics, which was the right move rather than compounding the mystery and refusing to explain. Ideally at that point you’d say something like, “We’ve had some complaints about some language you’ve used and we want to clarify what is and isn’t okay. As long as we come out of that meeting on the same page I don’t expect it will need to be addressed again after that.” That way they know the topic and they’re also clear that they’re not about to be fired. You have plenty of standing to tell HR that you think this was a bad way to handle it and that it unnecessarily panicked the employee, and ask that they coordinate with managers on this sort of communication in the future. The post HR panicked my employee by sending a mysterious meeting request right before the weekend appeared first on Ask a Manager. View the full article
-
What the post-Brexit reset deal means for the UK
Agreement ranges over food, fishing, defence and youth mobility, but much remains to be finalised. View the full article
-
AI and Work (Some Predictions)
One of the main topics of this newsletter is the quest to cultivate sustainable and meaningful work in a digital age. Given this objective, it’s hard to avoid confronting the furiously disruptive potentials of AI. I’ve been spending a lot of time in recent years, in my roles as a digital theorist and technology journalist, researching and writing about this topic, so it occurred to me that it might be useful to capture in one place all of my current thoughts about the intersection of AI and work. The obvious caveat applies: these predictions will shift — perhaps even substantially — as this inherently unpredictable sector continues to evolve. But here’s my current best stab at what’s going on now, what’s coming soon, and what’s likely just hype. Let’s get to it…

Where AI Is Already Making a Splash

When generative AI made its show-stopping debut a few years ago, the smart money was on text production becoming the first killer app. For example, business users, it was thought, would soon outsource much of the tedious communication that makes up their day — meeting summaries, email, reports — to AI tools. A fair amount of this is happening, especially when it comes to lengthy utilitarian communication where the quality doesn’t matter much. I recently attended a men’s retreat, for example, and it was clear that the organizer had used ChatGPT to create the final email summarizing the weekend schedule. And why not? It got the job done and saved some time. It’s becoming increasingly clear, however, that for most people the act of writing in their daily lives isn’t a major problem that needs to be solved, which is capping the predicted ubiquity of this use case. (A survey of internet users found that only around 5.4% had used ChatGPT to help write emails and letters. And this includes the many who maybe experimented with this capability once or twice before moving on.)

The application that has instead leaped ahead to become the most exciting and popular use of these tools is smart search. If you have a question, instead of turning to Google you can query a new version of ChatGPT or Claude. These models can search the web to gather information, but unlike a traditional search engine, they can also process the information they find and summarize for you only what you care about. Want the information presented in a particular format, like a spreadsheet or a chart? A high-end model like GPT-4o can do this for you as well, saving even more steps. Smart search has become the first killer app of the generative AI era because, like any good killer app, it takes an activity most people already do all the time — typing search queries into websites — and provides a substantially, almost magically better experience. This feels similar to electronic spreadsheets conquering paper ledger books or email immediately replacing voice mail and fax. I would estimate that around 90% of the examples I see online right now from people exclaiming over the potential of AI are people conducting smart searches. This behavioral shift is appearing in the data. A recent survey conducted by Future found that 27% of US-based respondents had used AI tools such as ChatGPT instead of a traditional search engine. From an economic perspective, this shift matters. Earlier this month, the stock price for Alphabet, the parent company of Google, fell after an Apple executive revealed that Google searches through the Safari web browser had decreased over the previous two months, likely due to the increased use of AI tools.
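To make this pattern concrete, here is a minimal sketch of the smart-search loop in Python, assuming the official `openai` package; the `search_web` helper is a hypothetical stand-in for whatever retrieval step these tools run internally, not a real API.

```python
# Minimal sketch of the "smart search" pattern: retrieve raw results,
# then have a model summarize only what answers the question.
# `search_web` is a hypothetical stand-in for a real retrieval backend.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_web(query: str) -> str:
    """Hypothetical helper: fetch raw web results for `query`."""
    raise NotImplementedError("plug in your own search backend here")

def smart_search(question: str) -> str:
    raw_results = search_web(question)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize only what answers the user's question."},
            {"role": "user",
             "content": f"Question: {question}\n\nSearch results:\n{raw_results}"},
        ],
    )
    return response.choices[0].message.content
```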
Keep in mind, web search is a massive business, with Google earning over $175 billion from search ads in 2023 alone. In my opinion, becoming the new Google Search is likely the best bet for a company like OpenAI to achieve profitability, even if it’s not as sexy as creating AGI or automating all of knowledge work (more on these applications later).

The other major success story for generative AI at the moment is computer programming. Individuals with only rudimentary knowledge of programming languages can now produce usable prototypes of simple applications using tools like ChatGPT, and somewhat more advanced projects with AI-enhanced agent-style helpers like Roo Code. This can be really useful for quickly creating tools for personal use or for putting together a proof-of-concept for a future product. The tech incubator Y Combinator, for example, made waves when it reported that a quarter of the start-ups in its Winter 2025 batch generated 95% or more of their products’ codebases using AI. How far can this automated coding take us? An academic computer scientist named Judah Diament recently went viral for noting that the ability for novice users to create simple applications isn’t new. There have been systems dedicated to this purpose for over four decades, from HyperCard to VisualBasic to Flash. As he elaborates: “And, of course, they all broke down when anything slightly complicated or unusual needs to be done (as required by every real, financially viable software product or service).” This observation created major backlash — as do most expressions of AI skepticism these days — but Diament isn’t wrong. Despite recent hyperbolic statements by tech leaders, many professional programmers aren’t particularly worried that their jobs can be replicated by language model queries, as so much of what they do is experience-based architecture design and debugging — skills for which we currently have no viable AI solution. Software developers do, however, use AI heavily: not to produce their code from scratch, but as helper utilities. Tools like GitHub’s Copilot are integrated directly into the environments in which these developers already work, and make it much simpler to look up obscure library or API calls, or spit out tedious boilerplate code. The productivity gains here are notable. Programming without help from AI is rapidly becoming rare.

The Next Big AI Application

Language model-based AI systems can respond to prompts in pretty amazing ways. But if we focus only on outputs, we underestimate another major source of these models’ value: their ability to understand human language. This so-called natural language processing ability is poised to transform how we use software. There is a push at the moment, for example, led by Microsoft and its Copilot product (not to be confused with GitHub Copilot), to use AI models to provide natural language interfaces to popular software. Instead of learning complicated sequences of clicks and settings to accomplish a task in these programs, you’ll be able to simply ask for what you need; e.g., “Hey Copilot, can you remove all rows from this spreadsheet where the dollar amount in column C is less than $10, then sort everything that remains by the names in column A? Also, the font is too small, make it somewhat larger.” Enabling novice users to access expert-level features in existing software will aggregate into huge productivity gains.
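As a concrete illustration, here is roughly what that spoken request would translate to in code: a sketch using pandas on a made-up toy table, where the column names "A" and "C" simply mirror the example command (the font-size request has no data equivalent, so it is omitted).

```python
# What the natural-language command above boils down to, expressed in
# pandas. The DataFrame is a made-up example mirroring the hypothetical
# spreadsheet in the command.
import pandas as pd

df = pd.DataFrame({
    "A": ["Dana", "Ben", "Alice"],
    "C": [25.00, 4.50, 112.75],
})

# "Remove all rows where the dollar amount in column C is less than $10..."
df = df[df["C"] >= 10]

# "...then sort everything that remains by the names in column A."
df = df.sort_values("A")

print(df)  # Alice and Dana remain, in alphabetical order
```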
As a bonus, the models required to understand these commands don’t have to be nearly as massive and complicated as the current cutting-edge models that the big AI companies use to show off their technology. Indeed, they might be small enough to run locally on devices, making them vastly cheaper and more efficient to operate. Don’t sleep on this use case. Like smart search, it’s not as sexy as AGI or full automation, but I’m increasingly convinced that within the next half-decade or so, informally-articulated commands are going to emerge as one of the dominant interfaces to the world of computation.

What About Agents?

One of the more attention-catching storylines surrounding AI at the moment is the imminent arrival of so-called agents, which will automate more and more of our daily work, especially in the knowledge sectors once believed to be immune from machine encroachment. Recent reports imply that agents are a major part of OpenAI’s revenue strategy for the near future. The company imagines business customers paying up to $20,000 a month for access to specialized bots that can perform key professional tasks. It’s the projection of this trend that led Elon Musk to recently quip: “If you want to do a job that’s kinda like a hobby, you can do a job. But otherwise, AI and the robots will provide any goods and services that you want.” But progress in creating these agents has recently slowed. To understand why requires a brief snapshot of the current state of generative AI technology…

Not long ago, there was a belief in so-called scaling laws, which argued, roughly speaking, that as you continued to increase the size of language models, their abilities would continue to rapidly increase. For a while this proved true: GPT-2 was much better than the original GPT, GPT-3 was much better than GPT-2, and GPT-4 was a big improvement on GPT-3. The hope was that by continuing to scale these models, you’d eventually get to a system so smart and capable that it would achieve something like AGI, and could be used as the foundation for software agents to automate basically any conceivable task. More recently, however, these scaling laws have begun to falter. Companies continue to invest massive amounts of capital in building bigger models, trained on ever-more GPUs crunching ever-larger data sets, but the performance of these models has stopped leaping forward as it did in the past. This is why the long-anticipated GPT-5 has not yet been released, and why, just last week, Meta announced it was delaying the release of its newest, biggest model, as its capabilities were deemed insufficiently better than its predecessor’s.

In response to the collapse of the scaling laws, the industry has increasingly turned its attention in another direction: tuning existing models using reinforcement learning. Say, for example, you want to make a model that is particularly good at math. You pay a bunch of math PhDs $100 an hour to come up with a lot of math problems with step-by-step solutions. You then take an existing model, like GPT-4, and feed it these problems one by one, using reinforcement learning techniques to tell it exactly where it’s getting certain steps in its answers right or wrong. Over time, this tuned model will get better at solving this specific type of problem. This technique is why OpenAI is now releasing multiple, confusingly named models, each seemingly optimized for different specialties. These are the result of distinct tunings.
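For the technically curious, here is a toy, self-contained sketch of that grading loop. Everything in it (the one-problem dataset, ToyModel, the exact-match grader) is a hypothetical stand-in; a real pipeline would apply policy-gradient updates to an actual language model's weights rather than printing rewards.

```python
# Toy illustration of tuning-by-graded-steps, as described above.
# ToyModel and the one-problem dataset are hypothetical stand-ins; a
# real pipeline would update model weights, not print reward values.

problems = [
    {"prompt": "What is 17 * 24?",
     "reference_steps": ["17*24 = 17*20 + 17*4", "= 340 + 68", "= 408"]},
]

class ToyModel:
    """Stand-in for a language model that answers in steps."""
    def solve(self, prompt: str) -> list[str]:
        # A real model would generate this; we fake a partially wrong attempt.
        return ["17*24 = 17*20 + 17*4", "= 340 + 60", "= 400"]

    def update(self, prompt: str, steps: list[str], rewards: list[float]) -> None:
        # A real update would nudge weights toward high-reward steps.
        print(f"reward signal for {prompt!r}: {rewards}")

model = ToyModel()
for problem in problems:
    attempt = model.solve(problem["prompt"])
    # Per-step grading is the key idea: the model learns *which steps*
    # were right or wrong, not just whether the final answer matched.
    rewards = [1.0 if got.strip() == want.strip() else 0.0
               for got, want in zip(attempt, problem["reference_steps"])]
    model.update(problem["prompt"], attempt, rewards)
```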
They would have preferred, of course, to simply produce a GPT-5 model that could do well on all of these tasks, but that hasn’t worked out as they hoped. This tuning approach will continue to produce interesting tools, but it will be much more piecemeal and hit-or-miss than what was anticipated when we still believed in scaling laws. Part of the difficulty is that this approach depends on finding the right data for each task you want to tackle. Certain problems, like math, computer programming, and logical reasoning, are well-suited for tuning, as they can be described by pairs of prompts and correct answers. But this is not the case for many other business activities, which can be esoteric and bespoke to a given context. This means many useful activities will remain un-automatable by language model agents into the foreseeable future. I once said that the real Turing Test for our current age is an AI system that can successfully empty my email inbox, a goal that requires the mastery of any number of complicated tasks. Unfortunately for all of us, this is not a test we’re poised to see passed any time soon.

Are AGI and Superintelligence Imminent?

The Free Press recently published an article titled “AI Will Change What it Means to Be Human. Are We Ready?”. It summarized a common sentiment that has been feverishly promoted by Silicon Valley in recent years: that AI is on the cusp of changing everything in unfathomably disruptive ways. As the article argues:

OpenAI CEO Sam Altman asserted in a recent talk that GPT-5 will be smarter than all of us. Anthropic CEO Dario Amodei described the powerful AI systems to come as “a country of geniuses in a data center.” These are not radical predictions. They are nearly here.

But here’s the thing: these are radical predictions. Many companies have tried to build the equivalent of the proposed GPT-5 and found that continuing to scale up the size of their models isn’t yielding the desired results. As described above, they’re left tuning the models they already have for specific tasks that are well-described by synthetic data sets. This can produce cool demos and products, but it’s not a route to a singular “genius” system that’s smarter than humans in some general sense. Indeed, if you look closer at the rhetoric of the AI prophets in recent months, you’ll see a creeping awareness that, in a post-scaling-law world, they no longer have a convincing story for how their predictions will manifest. A recent Nick Bostrom video, for example, which (true to character) predicts Superintelligence might happen in less than two years (!), adds the caveat that this outcome will require key “unlocks” from the industry — which is code for “we don’t know how to build systems that achieve this goal, but, hey, maybe someone will figure it out!” (The AI centrist Gary Marcus subsequently mocked Bostrom by tweeting: “for all we know, we could be just one unlock and 3-6 weeks away from levitation, interstellar travel, immortality, or room temperature superconductors, or perhaps even all four!”) Similarly, if you look closer at AI 2027, the splashy new doomsday manifesto which argues that AI might eliminate humanity as early as 2030, you won’t find a specific account of what type of system might be capable of such feats of tyrannical brilliance.
The authors instead sidestep the issue by claiming that within the next year or so, the language models we’re tuning to solve computer programming tasks will somehow come up with, on their own, code that implements breakthrough new AI technology that mere humans cannot understand. This is an incredible claim. (What sort of synthetic data set do they imagine being able to train a language model to crack the secrets of human-level intelligence?) It’s the technological equivalent of looking at the Wright Brothers’ Flyer in 1903 and thinking, “well, if they could figure this out so quickly, we should have interstellar travel cracked by the end of the decade.”

The current energized narratives around AGI and Superintelligence seem to be fueled by a convergence of three factors: (1) the fact that scaling laws did apply for the first few generations of language models, making it easy and logical to imagine them continuing to apply up the exponential curve of capabilities in the years ahead; (2) demos of models tuned to do well on specific written tests, which we tend to intuitively associate with intelligence; and (3) tech leaders pounding furiously on the drums of sensationalism, knowing they’re rarely held to account on their predictions. But here’s the reality: We are not currently on a trajectory to genius systems. We might figure this out in the future, but the “unlocks” required will be sufficiently numerous and slow to master that we’ll likely have plenty of clear signals and warnings along the way. So, we’re not out of the woods on these issues, but at the same time, humanity is not going to be eliminated by the machines in 2030 either. In the meantime, the breakthroughs that are happening, especially in the world of work, should be both exciting and worrisome enough on their own for now. Let’s grapple with those first.

####

For more of my thoughts on AI, check out my New Yorker archive and my podcast (in recent months, I often discuss AI in the third act of the show). For more of my thoughts on technology and work more generally, check out my recent books on the topic: Slow Productivity, A World Without Email, and Deep Work. The post AI and Work (Some Predictions) appeared first on Cal Newport. View the full article
-
A first step towards rebuilding UK-EU ties
Hard-fought reset lays bare the realities of the post-Brexit relationship. View the full article
-
Here's the Oura Ring Data You Can Access Without a Subscription
We may earn a commission from links on this page. The Oura ring can give you a ton of data on your sleep, health, and fitness, but you need to pay for a $5.99/month subscription to see all of it. So what happens if you get an Oura ring but don’t pony up for the subscription? Here’s the full rundown, with screenshots of exactly what you’ll see—and a little-known way of getting data the app doesn’t show you. As I explain in my review of the Oura Ring 4, you should budget for both the ring and its ongoing subscription if you want to get the data and analysis the Oura ring is famous for offering. The subscription isn’t for premium extras; it’s for the basic functionality of the app, without which there isn’t much point to wearing the ring. But! There is a way to get the data the ring collects without going through the app. It comes as a spreadsheet download, so it’s not useful for casual “how did I sleep last night?” use, but it’s fine if what you really want is to do some data analysis. I’ll explain how to get that below, but first, let’s see what the app looks like sans subscription.

The app only shows scores and meditations (mostly)

The two screens you get without a subscription. Credit: Beth Skwarecki/Oura

Here’s what you’ll see in the Oura app: not much. You’ll get a score for each of Activity, Readiness, and Sleep. Each score is out of 100—higher is better—and comes with a label like “good” or “optimal.” That’s it. No heart rate, HRV, hours of sleep, stress timeline, Advisor chatbot, meal tracking, none of that. Just three somewhat inscrutable numbers. Without raw data, I don’t consider these scores useful at all. (Imagine somebody asking “how did you sleep?” and you answer “82,” like they’re supposed to know what that means.) You do still get an Explore tab, which contains guided meditations and a few Oura tutorials, such as a one-minute video explaining what your readiness score is supposed to mean. Interestingly, after doing a meditation on my no-subscription account, I don’t see my heart rate or HRV during the session, but I do see some text giving my “baseline” for each of those measures. That’s not my most recent night’s reading, though. For that, you need to go download a spreadsheet.

For comparison, a sampling of what you get with a subscription. Credit: Beth Skwarecki/Oura

You can download your data from Oura’s website, even without a subscription

Alright, here’s the fun stuff. If you are OK with getting your data in spreadsheet form, you can download it directly from the Oura website. Go to cloud.ouraring.com. If you had a subscription, you’d be able to view some cool graphs on your dashboard, but you don’t, so the page is mostly blank. Tap on My Account and scroll down to Export Data. You’ll see a list of .csv and .json files. The very first one, sleep.csv (I think that link will work for you if you have an Oura account), is probably the most useful. Go ahead and bookmark that link on your phone, and you can check it in lieu of checking the app.

A partial view of sleep.csv (there are tons more columns than I can show in one screenshot). Credit: Beth Skwarecki

From this spreadsheet, you can see your resting heart rate, HRV, and pretty much everything else that Oura logs every night. Your total sleep duration, for example, is in the 21st column, and it’s recorded in seconds. (28,410 seconds is about 7 hours and 54 minutes, if I’m doing my math right.) Explore the other files to see which ones have the data you’re most interested in.
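If you'd rather poke at that export programmatically, here is a minimal sketch using pandas. The column name "total_sleep_duration" is an assumption, since the article only says the duration lives in the 21st column and is measured in seconds; check the printed header list against your own file.

```python
# Minimal sketch for exploring Oura's exported sleep.csv with pandas.
# "total_sleep_duration" is an assumed column name -- inspect the
# printed header list and adjust to match your actual export.
import pandas as pd

df = pd.read_csv("sleep.csv")
print(df.columns.tolist())  # see which columns your export actually has

# Durations are stored in seconds (28,410 s is about 7 h 54 min), so
# convert to hours for a quick nightly readout.
if "total_sleep_duration" in df.columns:
    df["sleep_hours"] = (df["total_sleep_duration"] / 3600).round(2)
    print(df["sleep_hours"].tail())
```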
These are the downloads available without a subscription. Credit: Beth Skwarecki/Oura

You need to give Oura a payment card to set up your ring in the first place

If you want nothing to do with the subscription at all, there’s an important caveat: The only way to set up your Oura ring (when it first arrives) is to sign up for a free trial that rolls into a regular subscription. So if you want to avoid giving Oura any payment information at all, you’re out of luck. You can cancel the subscription renewal immediately after signing up, though, which will mean you won’t be charged. View the full article
-
Putin ‘ready to work’ on Ukraine peace but outlines no concessions
Russian leader’s comments follow phone call with US President Donald Trump. View the full article
-
Mortgage brokers say FHA changes, inventory are key hurdles
Mortgage brokers say hurdles more pressing than a high interest rate environment range from nationwide inventory shortages to property tax increases. View the full article
-
[Newsletter] When Your Layoff Anxiety Won’t Go Away
Hello folks, Remote work can be freeing. But it can also be isolating, uncertain, and messy, because life doesn't get easier just because you're not commuting. This week’s reads are here to help you feel a little more seen and a little less alone in it all. – Vic

Our Favorite Articles 💯

Remote Work Didn't Kill Office Romance, It Just Looks Different Now (Vice) Turns out Slack crushes are a real thing. This piece explores the new shape of relationships in distributed workplaces. 👉 Read it here.

How Do You Talk About Past Jobs You Regret In Interviews? (Hacker News) It happens—you’ve taken a job you wish you hadn’t. This thread offers smart, thoughtful ways to talk about it without spiraling. Keep reading.

How Do You Stay Grounded And Avoid Burnout While Living The Digital Nomad Lifestyle? (Reddit) Honest, practical advice from people who've been there. 👉 Check it out.

When Your Layoff Anxiety Won’t Go Away (HBR) You're not imagining it: layoff worry has a way of hanging around. Here's how to name it, and what to do when it lingers. 👉 Learn more. (Photo: David Zaitz/Getty Images)

This Week's Sponsor 🙌

Online self-paced PMP & CAPM prep that works. Brain Sensei offers self-paced online courses to help you earn your PMP® or CAPM® certification. Learn with engaging, story-based modules and practice with an unlimited exam simulator. PMP certification holders earn a salary that is 23% greater than their uncertified peers. CAPM certification kickstarts your project management career with confidence. Start Learning. Pass with Confidence.

Remotive Jobs 💼

Let's get you hired! These teams are hiring now:

💻 Engineering
👉 Senior Independent Software Developer at A.Team (Americas, Europe, Israel)
👉 Senior Independent UX/UI Designer at A.Team (Americas, Europe, Israel)
👉 iOS Developer at nooro (USA)
👉 Senior Shopify Developer at Proxify (CET +/- 3 hours)

🎨 Design
👉 Web Designer at Contra (Worldwide)
👉 Logo Designer at Contra (Worldwide)
👉 Graphic Designer at Contra (Worldwide)

📈 Marketing
👉 Senior Content Marketer at Animalz (Worldwide)

Free Guides & Tools

Public Job Board: We curate 2,000 remote jobs so you don't have to! Find your remote job →
Exclusive Webinar: 3 Mistakes to Avoid When Looking For A Remote Startup Job (And What To Do Instead) Register for free →
Job Search Tips: Looking for a remote job? Here are our tips to help you work remotely. Check it out → View the full article
-
Nintendo's Making It Easier to Find Lost Switch 2 Joy-Cons
Continuing the steady stream of news about the upcoming Switch 2 console, Nintendo has now revealed that it's improved the console's ability to find lost Joy-Con controllers. The company demonstrated this in a video posted in the Nintendo Today app, which is available on Android and iPhone.

How to locate missing Nintendo Switch 2 Joy-Cons

To find your missing Switch 2 Joy-Cons, go to the home screen, open the Controllers menu, and select Search for controllers. Now, select the missing controller to start your search. If you don't have the Nintendo Today app, check out this Bluesky post to see the feature in action. A similar feature was available on the original Nintendo Switch, where you could remotely trigger Joy-Con controllers to rumble, which would hopefully help you find them. Now, though, you can go beyond a simple rumble. With the Switch 2, missing Joy-Cons will vibrate and play a beeping sound, which should make it easier to hear your poor controller's cries for help. This process is quite similar to pinging your iPhone or Apple Watch. Nintendo hasn't yet revealed any speaker features for the Switch 2 Joy-Con, so it's likely the beeping sound is actually just a retuned rumble. Still, that does paint an impressive picture of what HD Rumble 2 can do. This is not the only quality-of-life improvement Nintendo has confirmed for the Switch 2. Recently, the company revealed a charging limiter and the ability to lock the Switch 2 with a PIN. There's also support for third-party USB-C webcams. Still, not everyone will feel the need to spend more money to get the Switch 2. If you're on the fence, click through to compare the Nintendo Switch 2's specs with the original Switch. And don't forget to check out the list of games coming to the Switch 2, or the new mouse controls feature. View the full article
-
Teams that build winning products use these 5 strategies from the start
Jake Knapp is a designer, investor, and general partner at Character Capital. He has spent the last 25 years helping companies create products that people genuinely love. He helped build Gmail, co-founded Google Meet, and has worked with hundreds of startups, including Blue Bottle Coffee, One Medical, and Slack.

What’s the big idea? The foundation of success is shockingly simple, and yet most teams get bogged down for months trying to strategize a new idea. Making your next big project a hit relies on creating a powerful Founding Hypothesis from the get-go. When done right, this method ensures that everyone’s voice gets heard, there is enough clarity to accelerate experimentation, and a smart product gets to stand in a dazzling spotlight. Rather than wasting time and money and missing opportunities, starting a project thoughtfully allows teams to move confidently and quickly toward solutions. Below, Jake shares five key insights from his new book, Click: How to Make What People Want. Listen to the audio version—read by Jake himself—in the Next Big Idea App.

1. Project beginnings are a hidden goldmine

The beginning of a new project is a moment of massive opportunity. With a strong beginning, we define the right strategy, gain confidence, and build momentum. Without a strong beginning, it’s nearly impossible to succeed. Beginnings are crucial, but beginnings are totally overlooked. The world’s most popular approach to starting projects is chaos. Meet, and meet, and meet. Talk, and talk, and talk. Churn out slide decks, documents, and spreadsheets that no one reads. Outlast your opponents in a political cage match. Finally, rely on a hunch and commit to years of work. That’s the old way—and it is bonkers. Doing things the old way, it can take six months or more to develop a strategy. The old way is like assembling IKEA furniture by tossing parts, an Allen wrench, and a dozen squirrels into a broom closet, then hoping for the best. We don’t have to accept the old way. We can redesign how we start projects. We can structure the first hours so that we get the best contribution from every team member, make smart decisions, and find a winning strategy as fast as possible.

2. Most teams skip the basics

Teams that build winning products share some fundamental traits. They know their customers, and the problem they can solve for them. They know which approach to take—and why it’s superior to the alternatives. And they know what they’re up against—and how to radically differentiate from the competition. These teams have mastered the basics. When I first began working with startups, I was embarrassed to ask founders basic questions like “Who are your competitors?” or “How will you differentiate?” because I didn’t want to waste their time or appear naive. But once I worked up the courage, I learned that if I asked three co-founders to write down their startup’s target customer, I got three different answers. If I asked a team what differentiated their product from the competition, I would witness a sixty-minute debate. Smart, motivated people who respect their colleagues can still struggle to get on the same page. Mastering the basics might be obvious, but it’s not easy.

3. A clear strategy starts with a clear calendar

Business as usual stands in the way of mastering the basics. In the modern workplace, we’re supposed to attend meetings with teammates, managers, business partners, etc.
We’re supposed to stay on top of our email and messages. We’re supposed to juggle multiple projects. And, of course, we’re supposed to meet deadlines and deliver results. But if we think we can take on ambitious projects and make them click with customers while bouncing along through business as usual, ricocheting from one context to the next, we’re fooling ourselves. Figuring out a project’s strategy takes intense focus. Choosing the best opportunity among many options takes intense focus. Designing and building a prototype to test our hypothesis? Yup, that, too, requires intense focus. The normal way of working does not allow for intense focus—especially intense focus that is shared by multiple members of a team. The solution is straightforward: make the difficult decision to call a timeout, drop everything—all the constant emails, constant meetings, constant context switching—and come together to think hard, make big decisions, and master the basics.

4. Silence and structure generate the best ideas

The group brainstorm is our species’ natural response to collaboration. Gather a bunch of hunter-gatherers from the Ice Age and ask them to build a hut, and you’ll get a group brainstorm. Gather a bunch of Royal Society scientists from 17th-century England and ask them to come up with a business plan, and soon they’ll be shouting ideas and ordering out for pizza and sticky notes. Group brainstorms are in our DNA. They’re fun—at least, for extroverts. But they don’t work. They produce mediocre ideas. They exclude those uncomfortable in the group, those who don’t excel at verbal sales pitches, and those who do their best thinking in silence. When it’s time to define your strategy, do not brainstorm out loud. Do not have an open-ended discussion. Instead, work alone together. Give each person time to generate proposals in silence, review others’ proposals in silence, and form opinions and vote in silence.

5. Strategy is better understood as a hypothesis

Until a solution clicks with customers, “strategy” is just an educated guess. In one way or another, that guess is almost certainly wrong. Maybe we’re differentiating on speed when people care most about simplicity. Maybe we chose the wrong problem or the wrong customer. First guesses might be off by a lot or a little—but they are almost always off. So, instead of writing strategy documents, start with a Founding Hypothesis. A Founding Hypothesis is a simple Mad Libs-style sentence that describes the essential guesses behind every project: “If we solve [problem] for [customer] with [approach], then they will choose it over [competition] because our solution is [differentiation].” The Founding Hypothesis is simple, and that’s exactly what makes it powerful. Products click when they make a compelling promise. That promise must be simple, or customers won’t pay attention. Best of all, once you’ve written a Founding Hypothesis, there’s no hiding behind slides, charts, and projections. Your educated guess is standing in a dazzling spotlight, and you’ll want to experiment, right away, to find out if the hypothesis is correct. This article originally appeared in Next Big Idea Club magazine and is reprinted with permission. View the full article
-
Automated Google Sheets Sales Pipeline Template With Salesforce Data
Salesforce is one of the most powerful sales tools out there: it’s in the name. But sometimes, you need another way to make that sales data available to other teams. That can be because they don’t have access to Salesforce at all, or because they need to process sales data through an intermediary, like a spreadsheet. That usually means manually copying and pasting data from Salesforce into that spreadsheet or spending hours cleaning up a data export.

But it doesn’t have to be that way. With Unito’s integration for Salesforce and Google Sheets, you can automatically export Salesforce data to your spreadsheets while keeping everything in sync. That means anything your sales teams do in Salesforce happens in Google Sheets, and vice versa. Here’s a guide to setting up Unito’s Salesforce–Google Sheets integration. With this free template, you get a ready-made spreadsheet built specifically for this export.

How the template works

Step 1: Click USE TEMPLATE in the corner to create your own copy.

Step 2: Sign up for a 14-day trial with Unito. To keep data in sync between Salesforce and Google Sheets, you will need a Unito account. Head to https://unito.io/ to create one.

Step 3: Build a flow with Salesforce and Google Sheets. We’ve included steps below to walk you through the process, and we recommend you follow the field mappings shown below.

Get the Template

“Some other tools we looked at were kind of crazy when it came to pricing. Another big thing for us is 2-way sync for our Salesforce instance. Most of those options only offer directional sync and Unito is bidirectional, which is what we really needed. Plus they offered the best pricing for us at this stage.” – Anel Behric, IT Manager, Cloudwerx

Read the Case Study

Step-by-step instructions for setting up a Unito flow

This template is pre-formatted to turn Salesforce data into a powerful sales pipeline and reporting tool built right into a spreadsheet. But it works best when you use Unito to feed that data into Google Sheets automatically.

Step 1: Connect Salesforce and Google Sheets to Unito and pick your blocks of work.

Step 2: Set the flow direction to one-way, from Salesforce to Google Sheets. This will automatically create a new row in Google Sheets any time a new work item (an opportunity, a task, a contact) is created in Salesforce. You can also set this to two-way if you want new Google Sheets rows to create new Salesforce work items.

Step 3: Build your rules. With rules, you can filter out Salesforce work items you don’t want in your Google Sheets report. You could choose to exclude all Salesforce opportunities tagged with a specific campaign, for example.

Step 4: Map your fields. If you only want data to sync from Salesforce to Google Sheets, set all your fields to one-way updates. If you want to be able to make changes to Salesforce from Google Sheets, set them up for two-way updates.

Step 5: Launch! After mapping your fields, you’re good to go. Now just sit back and watch as Salesforce work items are automatically synced to your report in Sheets.

Ready to start? Spend less time on data entry and more on selling. Get the template

FAQ: Google Sheets sales pipeline template

What is a Google Sheets sales pipeline template?

A Google Sheets sales pipeline template turns sales leads and deals from other apps into Google Sheets rows. This allows your sales team to represent important deals in a fully customizable Google Sheet, making sales pipeline management and reporting a breeze.
Many of these templates also come with prebuilt charts and graphs that break down sales deals based on which stage they’re in, potential revenue, and other variables. These pre-built pipeline templates allow your sales team to quickly deploy a method of tracking deals without paying for a dedicated tool.

Can you use Google Sheets for your sales pipeline?

Yes, you can use Google Sheets for your sales pipeline. Since Google Sheets is a flexible, customizable tool, you can build a sales pipeline to your exact specifications. If you don’t have a dedicated sales tool, this lets you start using a sales pipeline, report on deals more effectively, and build a stronger, data-driven sales strategy. If you do have a sales tool like Salesforce or HubSpot, using Google Sheets for your sales pipeline allows you to centralize contact and deal information in a platform everyone in your organization can access.

Why use Google Sheets for your sales pipeline?

Google Sheets is one of the most common tools across industries, and its flexibility makes it uniquely suited to all sorts of administrative tasks. If you need a custom tool for your sales pipeline, either because you don’t have a dedicated sales tool or because you need a better way to report on sales performance, Google Sheets is a strong choice. You can start with a pre-built template and easily modify it to match your sales process over time. Few tools allow that.

Do you need software integration for a Google Sheets sales pipeline?

While you can manually load customer and deal data into your Google Sheets sales pipeline template, it’s far from the most efficient method, especially if your customer data lives in tools like Salesforce or Google Contacts. Software integration can automatically pull contact and deal data from the rest of your tool stack, plug it into your sales pipeline template, and clean it so it’s consistent no matter where it came from. A two-way sync integration like Unito can keep the data in your Google Sheets pipeline up to date by regularly checking the corresponding records in your other tools. (For a sense of the manual scripting this replaces, see the sketch below.)
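To make the contrast concrete, here is a minimal sketch of the kind of one-way export script a sync integration spares you from writing and maintaining. It assumes the real simple-salesforce and gspread Python libraries; the credentials, the "Sales Pipeline" spreadsheet name, and the SOQL fields are hypothetical placeholders, not details from Unito’s template.

```python
# Manual one-way export: Salesforce opportunities -> Google Sheets rows.
# A rough sketch under the stated assumptions, not Unito's implementation.
import gspread
from simple_salesforce import Salesforce

# Hypothetical credentials -- substitute your own.
sf = Salesforce(username="you@example.com",
                password="password",
                security_token="token")

# Pull open opportunities with a SOQL query.
records = sf.query(
    "SELECT Name, StageName, Amount FROM Opportunity WHERE IsClosed = false"
)["records"]

# Append each opportunity as a row in a spreadsheet named "Sales Pipeline".
gc = gspread.service_account()  # reads a Google service-account key file
ws = gc.open("Sales Pipeline").sheet1
for r in records:
    ws.append_row([r["Name"], r["StageName"], r["Amount"]])
```

Even this scripted version only moves data one way, only when you run it, and does nothing to push spreadsheet edits back into Salesforce; that gap is what a two-way sync covers.

View the full article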
-
Fed exploring home loan bank collateral interoperability
Federal Reserve Vice Chair Philip Jefferson said the central bank is in the "early stages" of enabling banks to pledge the same collateral to both Federal Home Loan Bank and discount window liquidity facilities. View the full article
-
Mortgage bonds largely unscathed after Moody's downgrade of US
Mortgage bonds backed by government-sponsored companies like Fannie Mae and Freddie Mac were trading at slightly wider spreads Monday morning after Moody's Ratings downgraded the US late last week. View the full article
-
U.S. tech firms earn dismal grades on human rights report card
This story originally appeared in Global Voices.

A decade after the first assessment, the 2025 Ranking Digital Rights Index: Big Tech Edition reveals a landscape of paradox. While some of the world’s most influential digital platforms demonstrate incremental improvements in transparency, particularly in governance disclosures from Chinese companies like Alibaba, Baidu, and Tencent, the overall picture suggests a concerning inertia. In a world grappling with rising authoritarianism, the spread of AI tools, and ongoing global conflicts, the report shows that many Big Tech companies are largely continuing with “business as usual,” failing to address critical issues.

The concentration of power within Big Tech remains a central concern. The report highlights how companies like Alphabet, Amazon, Apple, Meta, and Microsoft have aggressively acquired competitors, consolidating their dominance in the digital landscape. This market concentration, in which Alphabet, Meta, and Amazon capture two-thirds of online advertising revenue, grants them power over online access and information flows. Despite increasing scrutiny from legal systems, evidenced by rulings against Google for illegal monopolies in search and advertising, the political influence of Big Tech appears to have grown. The symbolic image of US Big Tech CEOs in the front row of the presidential inauguration underscores their deep connections with government bodies, potentially hindering much-needed oversight at a time when human rights and democratic structures face unprecedented challenges globally.

This dominance is further exacerbated in a context of conflict. “Alphabet, Amazon, and Microsoft have all developed tools meant for war and integration with lethal weapons. Their cloud infrastructure has powered military campaigns,” the report reveals. Ranking Digital Rights also calls attention to propaganda, especially on X and platforms owned by Meta.

Lack of transparency

While the report highlights pockets of progress, particularly among Chinese companies (Alibaba, Tencent, and Baidu) showing increased transparency in governance, the analysis also surfaces recurring patterns that raise concerns. Though Meta has improved its disclosure of how its algorithms curate content and has enhanced security with default end-to-end encryption on some messaging services, significant shortcomings persist across the industry.

A common issue is the widespread lack of transparency in how companies handle private requests for user data or content restrictions, with Samsung notably disclosing no information in this area. The very engines of Big Tech’s profit—algorithms and targeted advertising—remain largely opaque. Despite the known risks for democracies linked to disinformation and election interference, none of the assessed companies achieved even half the possible score in this area. Alphabet and Meta even showed slight declines in transparency around their targeted advertising practices. Most companies fail to disclose information about advertisements removed for violating their policies or to provide evidence of enforcing their ad targeting rules.

X declined significantly more than the other companies analyzed. “The company’s transformation from the publicly listed Twitter to the privately held X Corp. and the elimination of its human rights team coincided with a significant drop in transparency across its governance, freedom of expression, and privacy practices,” the report emphasized.
X failed to publish a transparency report in both 2022 and 2023. While a report finally surfaced in September 2024, it fell outside the assessment’s cutoff. Even more troubling is the reported removal of years’ worth of transparency reports dating back to 2011.

Finally, the report points to a troubling pattern of policy evolution. Companies like Meta and YouTube have been revising their content policies in ways that have sparked widespread concern, such as Meta dismantling its third-party fact-checking program in the US and YouTube removing “gender identity” from its hate speech policy. Global Voices has covered the consequences of this policy shift in Africa, as well as the need for fact-checking amid digital authoritarianism, especially during elections, as in the case of Indonesia. This suggests a potential shift toward justifying existing behaviors rather than upholding previously embraced principles.

The 2025 RDR Index demonstrates stagnation at a critical time. While acknowledging some positive developments, the report calls for a renewed effort from different stakeholders, especially civil society, investors, and policymakers. View the full article
-
Airbnb ordered to block over 65,000 holiday rentals in Spain for rule violations
Spain has ordered Airbnb to block more than 65,000 holiday listings on its platform for violating rules, the Consumer Rights Ministry said Monday. The ministry said that many of the 65,935 Airbnb listings it had ordered withdrawn did not include their license number or specify whether the owner was an individual or a company. Others listed numbers that didn’t match what authorities had on file, it said.

Spain is grappling with a housing affordability crisis that has spurred government action against short-term rental companies. In recent months, tens of thousands of Spaniards have taken to the streets protesting rising housing and rental costs, which many say have been driven up by holiday rentals on platforms like Airbnb that have proliferated in cities like Madrid and Barcelona and in many other popular tourist destinations.

“Enough already with protecting those who make a business out of the right to housing,” Consumer Minister Pablo Bustinduy told reporters on Monday.

Airbnb said that it would appeal the decision. Through a spokesperson, the company said it did not think the ministry was authorized to rule on short-term rentals, and that the ministry had used “an indiscriminate methodology” that swept in Airbnb rentals that do not need a license to operate.

Last year, Barcelona announced a plan to close all 10,000 apartments licensed in the city as short-term rentals by 2028 to safeguard the housing supply for full-time residents.

The ministry said it had notified Airbnb of the noncompliant listings months ago, but the company had appealed the move in court. Spain’s government said Madrid’s high court had backed the order sent to Airbnb. Bustinduy said it called for the immediate removal of 5,800 rental listings from the site, and that two subsequent orders would be issued until the total of nearly 66,000 removals is reached.

Spain’s government said the first round of affected properties were located across the country, including in the capital, Madrid, as well as in the regions of Andalusia and Catalonia, whose capital is Barcelona.

—Suman Naishadham, Associated Press

View the full article
-
should I penalize candidates for not sending thank-you notes?
A reader writes:

I am currently interviewing candidates for two positions. So far I’ve interviewed six people, and not one has sent any kind of follow-up or thank-you note. I can tell from the virtual meeting invite that they all have my email address, so that’s not the reason. I polled some friends and got a split on whether these notes are even expected nowadays. I know you always suggest writing a strong thank-you note to improve your candidacy, but honestly I’d be thrilled with even a one-line acknowledgement. With the candidates all being comparable, any candidate sending me a note is certainly going to rank higher for me. Am I being old-fashioned about this?

I answer this question — and two others — over at Inc. today, where I’m revisiting letters that have been buried in the archives here from years ago (and sometimes updating/expanding my answers to them). You can read it here.

Other questions I’m answering there today include “My employee apologizes all the time” and “People incorrectly call me Mr.”

The post should I penalize candidates for not sending thank-you notes? appeared first on Ask a Manager. View the full article
-
Moody’s throws Trump a curve ball
Credit downgrade is symbolic blow to American prestige and should spur Washington to get its fiscal house in order View the full article
-
Workflow Bottlenecks: How To Identify and Fix Them
Workflow bottlenecks are the points in a workflow where work slows down or piles up. In this article, we show you how to identify, prevent, and address bottlenecks to improve your team's productivity. The post Workflow Bottlenecks: How To Identify and Fix Them appeared first on The Digital Project Manager. View the full article
-
How Private Equity Can Help You Cash in on Your Business Twice
You are probably hearing other business owners refer to “the second bite” with a big smile on their faces. The second bite refers to the opportunity that arises after a business owner sells less than 100 percent of their ownership stake. By leaving chips on the table, business owners have the chance to partner with a private equity firm to accelerate growth and business value. This can lead to a second bite, which comes with the next sale of the business.

At some point, many great businesses reach an inflection point: a time when the owner wants a partner who can help preserve and enrich the future of the business (and do it in the right way). When a business owner is selling and thinking ahead to a second bite (rolling equity and exiting again down the line), the three most important elements to focus on are:

1. Pick your dance partner wisely

It is essential to find a partner who is a good fit for your business and its growth; this partner will work with you for the next three to seven years. To accelerate the business’s value, you have to bring in great talent, technology, and tools, and the right business partner will help you make sure you invest in the right things at the right time. Assess the following: Does your potential partner share and support your vision for the next growth stage? Can they provide operational, financial, or strategic resources to accelerate growth? Do they have a track record of treating team members and founders well? Just like when dating, make sure you learn all you should before you agree to get married.

2. Roll the dice

A lot has changed over the past 10 years. Capital is now more founder-friendly, which means there are more opportunities for an owner to stay involved and “roll the dice.” Investors are more willing to partner with founders on founder-friendly terms, working hard on alignment while honoring the founder’s ability to take the next step, and that can produce great outcomes. Set clear expectations on the equity rollover terms; clarify governance and control rights (for example, whether you will remain on the board or continue in an operational role); and define liquidity terms and a clear timeline for how and when to monetize the second bite.

3. Align on growth plans and your participation

Sellers and buyers must agree on how value will be created, whether through organic growth, mergers and acquisitions, or operational strategies. The owner’s involvement can also directly affect the second bite’s value, so it’s essential to determine whether you as the owner will be involved in day-to-day operations or will take on a more strategic role.

The second bite is an opportunity we’re seeing more frequently now, and it’s here to stay. It’s an excellent chance for founders to stay involved and participate in that accelerated growth with their new partner. View the full article
-
Mozilla Just Patched Two Firefox Zero-Days Discovered at a Hacking Contest
If you're a Firefox user, you need to update your browser. Mozilla has released a security patch for two zero-day vulnerabilities identified at the recent Pwn2Own hacking contest held in Berlin. Zero-days are critical security flaws that have been actively exploited or publicly disclosed before an official fix is available.

Browsers are prime targets for malware, and Firefox isn't the only browser in which zero-day exploits have been discovered recently. Earlier this month, Google released an emergency patch for Chrome to address a high-severity vulnerability (CVE-2025-4664) that permitted full account takeover—CISA later confirmed that this flaw was being actively exploited in attacks. (If you're using Chrome, you should consider other privacy-focused browser alternatives anyway.)

Zero-days discovered in Firefox

Both zero-day exploits discovered at Pwn2Own Berlin are out-of-bounds flaws that let attackers read or write data outside the memory an object is supposed to use, potentially exposing sensitive information or permitting code execution. CVE-2025-4918 allows an out-of-bounds read or write on a JavaScript Promise object (a proxy value for a process that hasn't completed yet), while CVE-2025-4919 permits an out-of-bounds read or write on a JavaScript object (a collection of "properties," which are associations between keys and values). CVE-2025-4918 was discovered by Edouard Bochin and Tao Yan from Palo Alto Networks, while CVE-2025-4919 was reported by Manfred Paul—each won $50,000 for their hacks.

The following versions of Firefox are vulnerable to these flaws and should be updated: Firefox before 138.0.4; Firefox Extended Support Release (ESR) before 128.10.1; Firefox ESR before 115.23.1; and Firefox for Android.

While Mozilla was quick to address these flaws, the company notes that neither exploit broke out of Firefox's "sandbox," which would be required in order to take control of a target's machine. That's a good sign for Firefox's overall security, as attackers at previous Pwn2Own competitions successfully broke out of the sandbox. Still, Mozilla recommends all users install the new patches as soon as possible.

How to update Firefox to the latest version

If you're a Firefox user, make sure your browser is up to date. You can check which version you're on by going to Firefox > About Firefox. Click the Restart to Update Firefox button if it appears.
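If you'd rather check from a terminal, here is a small convenience sketch. It assumes the standard firefox --version command-line flag (which prints a line like "Mozilla Firefox 138.0.4") and that the firefox binary is on your PATH; it compares against the patched mainline release only, since ESR builds print a version ending in "esr" and should be compared against 128.10.1 or 115.23.1 instead.

```python
# Quick check: is the installed (mainline) Firefox at least 138.0.4?
# A convenience sketch, not an official Mozilla tool; assumes the
# standard `firefox --version` CLI output "Mozilla Firefox X.Y.Z".
import subprocess

PATCHED = (138, 0, 4)  # first mainline release with the fixes

def installed_version() -> tuple:
    out = subprocess.run(
        ["firefox", "--version"], capture_output=True, text=True, check=True
    ).stdout
    # e.g. "Mozilla Firefox 138.0.4" -> (138, 0, 4)
    return tuple(int(p) for p in out.strip().rsplit(" ", 1)[-1].split("."))

if installed_version() < PATCHED:
    print("Your Firefox predates the fix; update now.")
else:
    print("You appear to be on a patched release.")
```

View the full article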