
Microsoft CEO Satya Nadella Testifies In OpenAI Trial

The Musk v. Altman trial entered its third week Monday, with Microsoft CEO Satya Nadella and former OpenAI co-founder and renowned AI researcher Ilya Sutskever taking the stand. Nadella testified that Elon Musk never raised concerns to him that Microsoft's investments in OpenAI violated any special commitments, and said he viewed the partnership as clearly commercial from the start. He also described OpenAI's 2023 board crisis as "amateur city." Meanwhile, Sutskever testified that he had raised concerns about Sam Altman because he feared OpenAI could be "destroyed." He expressed concerns about Altman's behavior to the board, in part because he said he felt "a great deal of ownership" over the startup. "I simply cared for it, and I didn't want it to be destroyed," Sutskever said. CNBC reports: Nadella said he was "very proud" that Microsoft took the risk to invest in OpenAI when "no one else was willing" to bet on the fledgling lab. Musk, who testified late last month, said Microsoft's $10 billion investment was the key tipping point that made him believe OpenAI was violating its nonprofit mission. He testified that the scale of the investment bothered him, and that it prompted him to open a legal investigation into OpenAI. "I was concerned they were really trying to steal the charity," Musk said from the stand. Nadella said he did not believe Microsoft's investments in OpenAI were donations, and that there was a clear commercial element to their partnership from the outset. He said that during the partnership's early years, Microsoft gave OpenAI sharp discounts on computing resources, believing it would reap marketing benefits from doing so. During a separate video deposition that was played on Monday morning, Michael Wetter, a corporate development executive at Microsoft, said the company had recognized approximately $9.5 billion in revenue through its partnership with OpenAI as of March 2025. [...] 
Nadella said he was "pretty surprised" by the board's decision [to fire Altman in November 2023], and that his priority was to figure out how to maintain continuity for Microsoft customers. Immediately after Altman was removed, Nadella said, he made an effort to learn more about what happened, adding that he suspected jealousy and poor communication were at play. During conversations with OpenAI board members after the firing, Nadella said he was simply trying to understand the language in OpenAI's statement about Altman being "not consistently candid" while communicating with the board. That language, Nadella said, "just didn't sort of suffice, because this is the CEO of a company that we are invested in and we're deeply partnered with, and so I felt that they could have explained to me what are the incidents or what is the detail behind it." There must have been instances of jealousy or miscommunication that could have justified pushing out Altman, Nadella said. He wanted more depth from the board members after the remark about candor, but no such information was available, he said. "It was sort of amateur city, as far as I'm concerned," Nadella testified. [...] Musk testified that he is not entirely against OpenAI having a for-profit unit, but he said it became "the tail wagging the dog." He repeatedly accused Altman and Greg Brockman of enriching themselves from a charity while also reaping the positive associations that come from running a nonprofit. "Microsoft has their own motivations, and that would be different from the motivations of the charity," Musk said from the stand. "All due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?" During a videotaped deposition shown in court last week, former OpenAI director Tasha McCauley recalled a discussion with Nadella and her fellow board members after the 2023 decision to dismiss Altman as OpenAI's CEO. 
"To the best of my recollection, Satya wanted to restore things to as they had been," McCauley said. The board members didn't think that was the right move, she said. But as a court witness on Monday, Nadella said he never demanded that the board reinstate Altman as OpenAI CEO.

Recap:
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

Sam Altman Had a Bad Day In Court

An anonymous reader quotes a report from Business Insider: As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman. His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit's mission, and his honesty as a leader of the organization. [...] This week, Musk's legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit. On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company's safety teams, which prompted her to leave the company. Witnesses also shared stories about the company launching products without the proper safety reviews -- or the knowledge of the board. Rosie Campbell, a former AI safety researcher at OpenAI, testified that the company became more product-focused during her time there and moved away from the long-term safety work that had initially drawn her in. She said both long-term AI safety teams were eventually eliminated, and that she supported Altman's reinstatement only because she feared OpenAI might otherwise collapse into Microsoft: "It was my understanding at the time that the best way for OpenAI to not disintegrate and fall apart would be for Sam to return." Still, Campbell's testimony wasn't entirely favorable to Musk. She also said xAI, Musk's AI company, likely had an inferior approach to safety compared to OpenAI's. Helen Toner, a former OpenAI board member, also testified about the board's concerns leading up to Altman's removal. She said the board was not primarily worried about ChatGPT's safety, but about Altman's leadership and investor relationships, saying, "The issues that we were concerned about in our decision to fire Sam were exacerbated by relationships with investors." 
Toner also described concerns that Altman was misrepresenting what others had said, telling the court, "We were concerned that Sam was inserting words into other people's mouths in order to get people to do what he wanted." Meanwhile, Tasha McCauley, a former OpenAI board member, described a deep loss of trust in Altman and accused him of creating "chaos" and "crisis" inside the company. She said Altman fostered a "culture of lying and culture of deceit," including allegedly misleading others about whether GPT-4 Turbo needed internal safety review before launch. Musk's lawyers then called to the stand David Schizer, a Columbia Law professor and nonprofit-governance expert, who framed Altman's alleged behavior as a serious governance problem for an organization that was supposed to be mission-driven. Asked about claims that products were launched without full board awareness or safety review, he said, "The board and CEO need to be partnering, working together, to make sure the mission is being followed," adding that "if the CEO is withholding that information, it's a big problem." The day ended with the start of a Microsoft executive's deposition. Microsoft VP Michael Wetter said Azure had integrated OpenAI technology, that Microsoft saw strategic value in having AI developers build on Azure, and that a 2016 agreement allowed OpenAI to use Microsoft tools for free even though it could mean a loss of up to $15 million for Microsoft. Testimony ended early, with no court on Friday and the trial set to resume Monday. 
Recap:
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial

Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear. "My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up." Shivon Zilis, a former OpenAI board member, said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. 
"How is it the case that we want to place a major bet on a speculative technology?" In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.

Recap:
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla

An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, largely rebutting Elon Musk's account of the startup's early years and of negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and that he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure, and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had already planned to leave the startup. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. Musk also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri

Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 in the weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.


OpenAI President Discloses His Stake In the Company Is Worth $30 Billion

OpenAI president Greg Brockman's testimony dominated the fifth day of the trial for Elon Musk's lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite his never having personally invested money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become "the most hated men in America." From a report: Brockman's stake would put him on the Forbes list of the world's richest people, with wealth comparable to Melinda French Gates. [...] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing -- which did not include the actual text exchange -- Musk sent a message to Brockman to gauge interest in a settlement. When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence. Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI's charity but never did. In explaining the delay, Brockman put the onus on Altman, Business Insider reports: "I asked Sam when I should donate this, and he said he would let me know." The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. "The most memorable part of Russell's testimony was when he talked about how much Musk's legal team paid him," notes Business Insider. "He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 and $1,000 per hour." 
Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Musk Concludes Testimony At OpenAI Trial

An anonymous reader quotes a report from CNBC: Elon Musk wrapped up his testimony on Thursday as the trial in his lawsuit against OpenAI CEO Sam Altman continued into its fourth day. OpenAI's attorney, William Savitt, cross-examined Musk in the morning. He asked Musk about the capped nature of Microsoft's investments in OpenAI, his involvement in negotiations about the company's structure, and whether he knew about the OpenAI nonprofit's recent initiatives. "I don't know what's going on at OpenAI," Musk testified. Savitt also asked Musk about his competing artificial intelligence startup, xAI. While not the main focus of the case, Musk said it is "partly" true that xAI used some of OpenAI's models to train its own models, a process known as distilling. Musk also suggested that xAI has used OpenAI's technology to help build the company. Musk sued OpenAI, Altman, and Greg Brockman, the company's president, in 2024, alleging that they went back on their commitments to keep the artificial intelligence company a nonprofit and to follow its charitable mission. He claims that the roughly $38 million he donated to seed OpenAI, a company he co-founded, was used for unauthorized commercial purposes. Once Musk wrapped up his testimony after roughly two hours of questioning on Thursday, his attorneys called Jared Birchall, who manages Musk's billions at his family office, as their next witness. Birchall testified about his knowledge of Musk's specific donations to OpenAI. Judge Yvonne Gonzalez Rogers oversaw the proceedings from federal court in Oakland, California. The trial will resume on Monday.

Recap:
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney

An anonymous reader quotes a report from the San Francisco Chronicle: Elon Musk returned to the witness stand Wednesday in Oakland federal court for a second day of testimony in his case against OpenAI, detailing his shift from being an enthusiastic supporter of the nonprofit to feeling betrayed. He also clashed repeatedly with OpenAI's attorney over questions that Musk believed were unfair. He said his feelings toward OpenAI CEO Sam Altman and President Greg Brockman moved through a "phase one" of support, a "phase two" of doubts, and finally a "phase three, where I'm sure they're looting the nonprofit. We're currently in phase three," Musk said with a chuckle. Musk said he was a "fool" for giving OpenAI "$38 million of essentially free funding to create what would become an $800 billion company," of which he has no equity stake. In his 2024 lawsuit, Musk alleged breach of charitable trust and unjust enrichment, arguing OpenAI abandoned its original nonprofit mission to benefit humanity to pursue financial gain. OpenAI's lawyer William Savitt argued Tuesday during his opening statement that the nonprofit entity remains in control of the for-profit public benefit corporation and is now one of the most well-funded nonprofits in the world. Musk is seeking to oust Altman from OpenAI's board, along with upwards of $134 billion in damages, which he said would be used to fund OpenAI's nonprofit mission. During cross-examination, Savitt clashed with Musk over questioning. Savitt asked whether Musk had contributed $38 million to OpenAI, rather than the $100 million he later claimed on X to have invested. Musk said he also contributed his reputation to the company and came up with the idea for the name, leading Savitt to ask Musk to respond yes or no to "simple" questions. "Your questions are not simple. They're designed to trick me, essentially," Musk said, adding that he had to elaborate or it would mislead the jury. 
He compared Savitt's questions to asking, "have you stopped beating your wife?" Judge Yvonne Gonzalez Rogers intervened, leading Musk to answer yes to the $38 million investment amount. The world's richest man said his doubts grew, and by late 2022 he thought, "wait a second, these guys are betraying their promise. They're breaking the deal." "I started to lose confidence that they were telling me the truth," Musk said. A turning point was co-defendant Microsoft's investment of billions of dollars into OpenAI, Musk said. On October 23, 2022, Musk texted Altman that he was "disturbed" to see OpenAI's valuation of $20 billion in the wake of the Microsoft deal. Musk called the deal a "bait and switch," since a nonprofit doesn't have a valuation. OpenAI had "for all intents and purposes" become primarily a for-profit company, Musk argued. Altman responded to Musk by text that "I agree this feels bad," saying that OpenAI had previously offered equity in the company but Musk hadn't wanted it at the time. Altman said the company was happy to offer equity in the future. Musk said it "didn't seem to make sense to me" to hold equity in what should be a nonprofit. Musk also testified about former OpenAI board member Shivon Zilis, who lives with him, is the mother of four of his children, and served as a senior advisor at Neuralink. He denied that she shared sensitive OpenAI information with him. Court evidence showed Musk had encouraged her to stay close to OpenAI to "keep info flowing" and had approved Neuralink recruiting OpenAI employees, which he defended by saying workers are free to change jobs. "It's a free country," Musk said.

Recap:
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


New Sam Bankman-Fried Trial Would Be Huge Waste of Court's Time, Judge Says

A federal judge denied Sam Bankman-Fried's request for a new trial, calling his claims of DOJ witness intimidation "wildly conspiratorial" and unsupported by the record. Judge Lewis Kaplan said (PDF) the FTX founder's motion appeared tied to a pre-indictment plan to recast himself as a Republican victim of Biden's DOJ in hopes of gaining sympathy, leniency, or even a Trump pardon. Ars Technica reports: Bankman-Fried was sentenced to 25 years in prison in 2024 for "masterminding one of the largest financial frauds in American history," US District Judge Lewis Kaplan wrote in his order. He was convicted on all charges, including wire fraud, conspiracy to commit securities fraud, commodities fraud, and money laundering. There is already an appeal pending in another court, the judge noted. But Bankman-Fried filed a separate motion for a new trial, claiming that there were "newly discovered" witnesses and evidence that might have helped his defense, if Joe Biden's Department of Justice hadn't intimidated them into refusing to testify or, in one case, lying on the stand. He also asked for a new judge, wanting Kaplan to recuse himself. However, Kaplan pointed out that "none of the witnesses" were "newly discovered." And more concerningly, Bankman-Fried offered no evidence that the witnesses could prove the "wildly conspiratorial" theory the FTX founder raised, claiming that their absence at the trial was a "product of government threats and retaliation," the judge wrote. Bankman-Fried's theory is "entirely contradicted by the record," Kaplan said. He emphasized that granting Bankman-Fried's request "would be a large waste of judicial resources as it could require another judge to familiarize himself or herself with an extensive and complicated record." Additionally, all three witnesses that Bankman-Fried claimed could give crucial testimony in his defense were known to him throughout the trial, and he never sought to compel their testimony. 
And the "self-serving social-media posts" of one witness who now claims that he lied when testifying against Bankman-Fried -- "Ryan Salame, who pleaded guilty" -- must be met with "utmost suspicion," Kaplan said. "If one were to take Salame at his current word, he lied under oath when pleading guilty before this Court," Kaplan wrote. Even if taken seriously, "his out-of-court, unsworn statements could not come anywhere close to clearing the bar to warrant a new trial," Kaplan said, deeming Salame's credibility "highly questionable." Further, "even if these individuals had testified for Bankman-Fried, his protestations that one or more of them would have supported his claims that FTX was not insolvent and that his victims all were compensated fully in the bankruptcy proceedings are inaccurate or misleading," Kaplan concluded. In the order, Kaplan's frustration seems palpable, as there may have been no need for him to rule on the motion at all after Bankman-Fried requested to withdraw it. But the judge said the ruling was needed because Bankman-Fried waited to file his withdrawal request until after the DOJ and the court had already spent time responding to and reviewing filings. Troublingly, Bankman-Fried's request to withdraw his motion without prejudice would have allowed him to potentially request a new trial after the appeal ended. Based on the substance of the filing, that risked wasting future court resources, Kaplan determined. To prevent overburdening the justice system, Kaplan deemed it necessary to deny Bankman-Fried's motion and request for recusal, rather than allow him to withdraw the filing without prejudice.


Musk Testifies OpenAI Was Created As Nonprofit To Counter Google

Elon Musk testified on day two of his trial against OpenAI, saying he helped create the company as a nonprofit counterweight to Google and would not have backed it if the goal had been private profit. CNBC reports: Musk on Tuesday was the first witness called to testify in the trial. He spoke about his upbringing, his many companies, his role in founding OpenAI and his understanding of its structure. Musk said in his testimony that he was not opposed to the creation of a small for-profit subsidiary, "as long as the tail didn't wag the dog." Musk repeatedly emphasized that he was motivated to start OpenAI to serve as a counterweight to Google. He said he got the idea after an argument about AI safety with Google co-founder Larry Page, who Musk said called him "a speciesist for being pro-human." Musk said he was concerned Page was not taking AI safety seriously, so he wanted there to be a nonprofit, open source alternative to Google. "I could have started it as a for profit and I chose not to," Musk said on the stand. Earlier, attorneys for Musk and OpenAI presented their opening arguments to the jury. Musk's lead trial lawyer, Steven Molo, delivered the opening statement for the Tesla and SpaceX CEO. OpenAI lawyer William Savitt gave the opening statement for the AI company, Altman and Brockman. OpenAI has characterized Musk's lawsuit as a baseless "harassment campaign." The company said Monday in a post on X that it "can't wait to make our case in court where both the truth and the law are on our side." Further reading: Elon Musk and OpenAI CEO Sam Altman Head To Court


Supreme Court Hears Case On How To Label Risks of Popular Weed Killer

An anonymous reader quotes a report from NPR: A divided U.S. Supreme Court on Monday heard a dispute over labels on the popular Roundup weed killer, which thousands of people blame for their cancers. How the Supreme Court rules could have implications for tens of thousands of lawsuits against Roundup maker Monsanto, which is now owned by Bayer. The case centers on who decides about warning labels on chemicals: the federal government -- or states or juries. [...] The justices will not be evaluating whether glyphosate causes cancer. Rather, they'll consider who should decide what appears on warning labels and whether states have a role to play after the EPA weighs in. The current U.S. solicitor general backed Monsanto. Sarah Harris, his principal deputy, said the Environmental Protection Agency is in the driver's seat, not anyone in Missouri. "Missouri thus requires adding cancer warnings but federal law requires EPA to approve new warnings and tasks EPA with deciding what label changes would mitigate any health risks," Harris argued. "State law must give way." Several justices, including Brett Kavanaugh, appeared to agree with Monsanto's argument about the need for a single, uniform standard across the country. But others, like Chief Justice John Roberts, wondered what would happen if the federal government moved more slowly than states that wanted to act quickly on information about new dangers. "Well, it does undermine the uniformity," Roberts said. "On the other hand, if it turns out they were right, it might have been good if they had an opportunity to do something, to call this danger to the attention of people while the federal government was going through its process," he said of the states. 
"There's a 15-year window between when that product has to be re-registered again and lots of things can happen in science, in terms of development about the product," she said. Bayer, which now owns Monsanto, only sells Roundup that contains glyphosate to farmers and businesses these days. Bayer has been pushing to resolve scores of the residential cases through a sweeping settlement, trying to put the costly claims behind it.


Supreme Court justices skeptically question both sides in geofence surveillance case

Supreme Court justices lobbed sharp questions at both sides about the constitutionality of geofence warrants during oral arguments Monday in a case that could have broader implications for law enforcement collection of Americans’ data.

Chatrie v. The United States stems from the 2019 conviction of Okello Chatrie in a bank robbery, where authorities obtained location data from Google about people within a specific area at a specific time.

In questioning an attorney for the petitioner, Adam Unikowsky, a number of conservative justices — including Chief Justice John Roberts — asked why the government shouldn’t be allowed to access location data taken from a third party given that Chatrie had “opted-in” to share that data.

“I just don’t agree that one should have to flip off one’s location history as well as other cloud services to avoid government surveillance,” Unikowsky answered, raising the question of whether the government would likewise be entitled to emails or calendar data that are also stored in the cloud. (Google has since moved location data to users’ individual devices.)

Some liberal justices, too, had skeptical questions for Unikowsky. “This identifies a place, a crime — a limited time frame, but a time frame,” Sonia Sotomayor said, referring to protections from open-ended searches under the Fourth Amendment. “So it’s not a general warrant in this historical sense.” But she also said that because location data follows users everywhere: “When the police are searching or asking for a search result, there’s no way to predict whether they’re going to invade your privacy.”

The line of questioning about how far a government request for bulk data can go continued from both conservative and liberal justices when it was the government’s turn to argue its position. The justices skeptically probed what made emails or calendar data different, and whether the government could do a physical search of all of the lockers in a storage facility to find one gun they believed might be there.

It was an unusually long session for the Supreme Court, going two hours. A ruling could come in June or July. Predicting how a court will decide based on justices’ questions is famously fraught. Only one justice, Samuel Alito, hinted strongly at how he was likely to decide.

“I’m struggling to understand why we are here in this case, other than the fact that at least four of us voted to take it,” he said. During his questioning of Unikowsky, he said that, given the lower court rulings, he didn’t believe anything new of note could come out of the court. “We are all free to write law review articles on this fascinating subject, but that seems like that’s what you’re asking for.”

Orin Kerr, a Stanford University law professor who filed a friend of the court brief on the government’s side, said he believed based on the oral arguments that the court will say geofence warrants can be drafted lawfully.

“The Justices seem likely to reject the broader argument Chatrie made about the lawfulness of the warrant,” he wrote on social media. “They’ll probably say the geofence warrants have to be limited in time and space.”

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, was struck by the absence of a major focus on “third-party doctrine,” under which there’s no reasonable expectation of privacy when citizens give their information to an outside party like a bank. 

She also homed in on arguments Unikowsky made.

“His argument really gave two lines to go down for the judges, and one was that you have a property interest in your data on the cloud, and the other was that you have a reasonable expectation of privacy for your data on the cloud,” she told CyberScoop. “And historically, both of those avenues have been grounds on which the Court has found that …issue is protected under the Fourth Amendment, and therefore that the actions constituted a search. So I thought it was interesting that he went and kind of argued both of those lanes.”

Alan Butler, executive director of the Electronic Privacy Information Center that filed a friend of the court brief on the side of the petitioner, said the stakes in the case are high.

“Today’s arguments underscored that the Supreme Court is weighing one of the most consequential privacy questions of the digital age: whether the government can use sweeping location data searches to identify a suspect,” he said in a statement after the arguments. “The Court should hold that the Constitution protects our digital data even when it is stored by an app or cloud provider. The Court should ensure that the highly sensitive records generated by our phones cannot be obtained without particularized suspicion and close judicial oversight.” 

The post Supreme Court justices skeptically question both sides in geofence surveillance case appeared first on CyberScoop.

Elon Musk and OpenAI CEO Sam Altman Head To Court

An anonymous reader quotes a report from the Associated Press: Technology tycoons Elon Musk and Sam Altman are poised to face off in a high-stakes trial revolving around the alleged betrayal, deceit and unbridled ambition that blurred the bickering billionaires' once-shared vision for the development of artificial intelligence. The trial, which started Monday with jury selection, centers on the 2015 birth of ChatGPT maker OpenAI as a nonprofit startup primarily funded by Musk before evolving into a capitalistic venture now valued at $852 billion. The trial's outcome could sway the balance of power in AI -- breakthrough technology that is increasingly feared as a potential job killer and an existential threat to humanity's survival. Those perceived risks are among the reasons that Musk, the world's richest person, cites for filing an August 2024 lawsuit that will now be decided by a jury and U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California. The civil lawsuit accuses Altman, OpenAI's CEO, and his top lieutenant, Greg Brockman, of double-crossing Musk by straying from the San Francisco company's founding mission to be an altruistic steward of a revolutionary technology. The lawsuit alleges they shifted into a moneymaking mode behind his back. OpenAI has brushed off Musk's allegations as an unfounded case of sour grapes that's aimed at undercutting its rapid growth and bolstering Musk's own xAI, which he launched in 2023 as a competitor. Gonzalez Rogers questioned potential jurors Monday about their views on Musk, Altman and artificial intelligence. Some jurors said they had negative views of Musk, but most said they would still be able to treat him fairly and focus on the facts of the case. [...] "Part of this is about whether a jury believes the people who will testify and whether they are credible," Gonzalez Rogers said during a court hearing earlier this year while explaining why she believed the case merited a trial. 
The judge will make the final decision on the case, with the jury serving in an advisory role. The latest development is that a jury has been seated. During selection, several prospective jurors expressed negative views of Elon Musk, but Judge Yvonne Gonzalez Rogers rejected attempts by Musk's lawyer to remove some of them solely on that basis, saying dislike of Musk does not automatically mean someone can't be fair. Nine jurors were seated, and the case is expected to wrap by May 21, when it would go to the jury. Tomorrow, April 28th, will feature opening statements.


Supreme Court Reviews Police Use of Cell Location Data To Find Criminals

An anonymous reader quotes a report from the New York Times: When the Call Federal Credit Union outside Richmond, Va., was robbed at gunpoint in 2019, the suspect took $195,000 from the bank's vault and fled before the police arrived. A detective interviewed witnesses and reviewed the bank's security footage. But with no leads, the officer relied on a so-called geofence warrant to sweep up location data from all the cellphones in the vicinity of the bank for the 30 minutes before and after the robbery. The data he gathered eventually led to the identification and conviction of Okello T. Chatrie, now 31, a Jamaican immigrant who came to the United States in 2017. Geofence searches have become increasingly popular as a tool for law enforcement, but critics say they put at risk the personal data of everyday Americans and violate the Constitution. Mr. Chatrie challenged the use of a geofence warrant in his conviction, in a case that will be heard by the Supreme Court on Monday. The justices will examine how the Constitution's traditional protections apply to rapidly changing technology that has made it easier for the police to scoop up vast amounts of data to assemble a detailed look at a person's movements and activities. It has been eight years since the court last took up a major Fourth Amendment case involving the expectations of privacy for the millions of people carrying cellphones in the digital age. In that 2018 case, the court ruled that the government generally needs a warrant to collect location data drawn from cell towers about the customers of cellphone companies. The court has also limited the government's ability to use GPS devices to track suspects' movements, and it has required that law enforcement get a warrant to search individual cellphones. In Mr. Chatrie's case, the government did obtain a warrant, but one that his legal team said was overly broad, violating Fourth Amendment protections against unreasonable searches.


Supreme Court to hear case centering on geofence warrants

Stetson Miller reports: The Supreme Court is set to hear a case on Monday that could determine if law enforcement’s use of geofence warrants violates the Fourth Amendment. The case was filed by a man named Okello Chatrie, who was convicted in a 2019 Virginia bank robbery after law enforcement obtained his digital location information...

New York Sues Coinbase and Gemini, Seeking To Halt Unlicensed Prediction Market Businesses

An anonymous reader quotes a report from the Associated Press: New York is suing Coinbase and Gemini, two of the newest players in the prediction market industry, arguing that the companies' unregulated and unlicensed platforms are illegal gambling operations. Attorney General Letitia James' lawsuit, filed Tuesday in state court in Manhattan, seeks to bar the companies' platforms from operating in the state unless and until they obtain licenses from the state Gaming Commission. "Gambling by another name is still gambling, and it is not exempt from regulation under our state laws and Constitution," James said in a statement. "Gemini and Coinbase's so-called prediction markets are just illegal gambling operations, exposing young people to addictive platforms that lack the necessary guardrails." Both companies began as cryptocurrency trading platforms before branching into the prediction space, which has been dominated by Kalshi and Polymarket. [...] New York's lawsuit alleges that Coinbase and Gemini are seeking "to avoid the legal and financial consequences" of the state's close regulation of gambling "by offering what is quintessentially wagering under the guise of offering 'event contracts' on a 'prediction market.'" By operating without licenses, the lawsuit says, Coinbase's and Gemini's prediction market businesses aren't paying the same taxes as licensed casinos and mobile sportsbooks, which are taxed by the state at a rate of approximately 51% of gross revenues. In addition, the lawsuit says, Coinbase and Gemini allow users as young as 18, while state law prohibits wagering by anyone under 21.


Healthcare AI Firm Sued Over Alleged Unlawful Disclosures of Genetic Data

Steve Alder reports: Tempus AI, a publicly traded healthcare artificial intelligence company, is facing multiple class action lawsuits over the alleged unauthorized collection and disclosure of genetic testing results, which were derived from genetic testing by Ambry Genetics Corporation (Ambry Genetics). Tempus AI used Ambry Genetics’ genetic database to train its AI models. Tempus AI...

The Supreme Court is about to decide how far geofence warrants can go

The Supreme Court will hear oral arguments Monday in a case that could limit the government’s ability to obtain bulk digital data of device users with a single warrant, in a rare instance of the country’s top justices taking on digital rights.

Chatrie v. The United States is the first major Fourth Amendment case the court has taken up since 2018, despite the proliferation of technology that impacts privacy since then. At the center of what the justices will address are so-called geofence warrants, which compel companies to disclose user data from a certain time and location.

“It’s a really interesting question about a law enforcement tool that would have been unimaginable a few decades ago, where you can basically look at potentially every phone, for example, that passed through a particular area in a particular window,” said John Villasenor, a law professor at UCLA and nonresident senior fellow at the Brookings Institution.

Both conservative and liberal civil liberties advocates have lined up in favor of the petitioner, leaving the United States government with fewer friend-of-the-court briefs on its side. Okello Chatrie was convicted of a 2019 bank robbery after police used a geofence warrant to obtain information from Google about users within a 17.5-acre area during a one-hour period, then refined the search.

In Congress, Democrats have raised concerns about geofence warrants as they might pertain to abortion rights, while Republicans have raised concerns about their use in tracking suspects linked to the Jan. 6, 2021 insurrection at the Capitol.

Courts have been divided on the legality of the geofence warrant in Chatrie’s case. Google has since stopped storing location data in the cloud and moved records directly to user devices, but those siding with Chatrie say the court’s ruling could have broader implications for financial records, search history records, chatbot records and more.

“We think it’s important that courts get it right and that, among other things, courts recognize that we have a property interest in many of our digital records,” said Brent Skorup, a legal fellow at the Cato Institute, which has filed an amicus brief on behalf of the petitioner. “If the government can get those digital records without a warrant, that renders the Fourth Amendment pretty empty and we’re not secure in our privacy and traditional rights to having control of our private papers and effects.”

The United States noted that Chatrie opted into Google’s storage of his location history, and that the information’s collection is not substantially different from identification of other markers of someone’s presence, like tire tracks or boot prints.

“Individuals generally have no reasonable expectation of privacy in information disclosed to a third party and then conveyed by the third party to the government,” it wrote. A collection of 32 attorneys general have sided with the U.S. government, as well as some law professors.

In the 2018 case, Carpenter v. The United States, the Supreme Court declined to extend that “third-party doctrine” — echoed by the U.S. government’s argument in the Chatrie case — to the search and seizure of 127 days’ worth of someone’s cell site location information, ruling that obtaining the records constituted a search under the Fourth Amendment and therefore required a warrant.

The validity of the warrant itself is at issue in Chatrie v. The United States. A Virginia court ultimately found the geofence warrant unconstitutional because it was not sufficiently specific and was not supported by probable cause for every user whose data was collected. The court nevertheless ruled the evidence admissible, because law enforcement acted in “good faith” in the belief that the warrant was constitutional.

Villasenor said the court could clear a lot up by addressing the good faith exception, something lower courts have used to sidestep substantial constitutional rulings, according to one study. But both Villasenor and Skorup say it’s possible that the Supreme Court also could fail to arrive at a conclusive ruling on the issues at stake in Chatrie.

While some civil liberties advocates are optimistic about the outcome due to the court’s ruling in Carpenter, three justices in that case have since been replaced by others.

The rarity of such digital privacy cases rising to the level of the Supreme Court might be simply a function of a crowded court agenda, but it’s not the only possibility.

“Part of it might be because the court has not developed a consensus view about how to approach these yet,” Skorup said. “It’s speculation on my part, but they probably have some ambivalence about taking up cases where they know that they’re not going to speak with one voice, or they know they might speak with fractured voices.”

Google itself filed a brief in the case, but sided with neither party, saying it took no position on the warrant in Chatrie’s specific case.

“But it urges the Court to hold that Google Location History and other similar digital documents stored remotely deserve the Fourth Amendment’s protection,” it wrote. “A contrary rule would leave the intimate details of millions of Americans’ daily lives — data that will exist in many forms as technology rapidly develops — exposed to warrantless surveillance.”

The post The Supreme Court is about to decide how far geofence warrants can go appeared first on CyberScoop.

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting

Florida's attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is "not responsible for this terrible crime" and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs. "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others." Uthmeier's office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. "We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable." [...] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. 
According to court filings, more than 200 AI messages have been entered into evidence in the case.


Judge gives tentative OK to $56 million menstrual app privacy settlement

Margaret Attridge reports: A federal judge Thursday indicated he would grant preliminary approval to a proposed $56 million class action settlement over a lawsuit that accused period tracking app Flo of sharing users’ highly sensitive information with third parties, including Google. “I have to get rid of this thing. No one has gotten paid. This...