Bill Gates is a megalomaniac Malthusian, but that's beside the point
Has the British Medical Journal had a come-to-Jesus moment? [2 stories]
BMJ Report Warns The Gates Foundation’s Foray Into “AI for Global Health” Will Produce Far More Harm Than Good
A familiar criticism of The Gates Foundation.
The Gates Foundation “AI initiative” is getting scrutinized, and criticized, from a variety of points of view. And now a trio of academics has offered their take on the controversial push into using AI to supposedly advance “global health.”
What seems to have prompted this particular reaction – authored by researchers from the University of Vermont, Oxford University, and the University of Cape Town – was an announcement in early August.
The Gates Foundation at that time let the world know that it was launching a new scheme, worth $5 million, set to bankroll 48 projects tasked with implementing AI large language models (LLMs) “in low-income and middle-income countries to improve the livelihood and well-being of communities globally.”
Every time, and it has been many times now, that the Foundation chooses to present itself as the “benefactor” of “low- or middle-income countries” (i.e., underdeveloped ones with little recourse to protect themselves from many things, including Bill Gates’ apparent “savior” complex), it leaves observers critical of the organization and its founder’s “experiments” feeling somewhat, if not a lot, ill at ease.
But feelings are one thing and scientific facts, hopefully, another. The paper, the gist of which is available in an article, asks the question: is the Gates Foundation trying to “leapfrog global health inequalities?”
Well, as they would say in the American south – is a frog’s… anatomy watertight?
But in scientific language: the initiative announced on August 9 is very likely yet another Gates project that makes all the right promises (improving the lives and well-being of people around the world, particularly the poor or those verging on poverty, and therefore obviously extra vulnerable, particularly to questionable “altruism”) while the results might be very different.
The study does not mince words here. From a related article:
“There are at least three reasons to believe that the unfettered imposition of these tools into already fragile and fragmented healthcare delivery systems risks doing far more harm than good.”
The research then breaks this down, starting with the very nature of “AI,” i.e., machine learning. “If you feed biased or low-quality data into a machine that supposedly ‘learns’, out comes the reproduction thereof, perhaps even worse than before,” is how the authors put it.
So then, if we are to believe what many scholars and activists do – namely that “the world and its governing political economy is structurally racist” – what could be expected as the outcome of “AI” learning from that particular huge dataset?
And then, another reason “to oppose the careless deployment of AI in global health,” according to the paper, “is the near complete absence of real, democratic regulation and control – an issue that is applicable to global health more broadly.”
You wouldn’t necessarily expect scientists to cut this deep, but here they are: “At the end of the day, the hard, sharp edges of capital, command and control are in the hands of a very few entities and individuals, notably including the conflictingly interested Microsoft corporation itself, which has invested more than US$10 billion in OpenAI.”
How do you say, “mic drop” – in sciencespeak?
Here’s the first part:
BMJ 2023;383 doi: https://doi.org/10.1136/bmj.p2486 (Published 01 November 2023). Cite this as: BMJ 2023;383:p2486
Peter Doshi, senior editor
After holding oversight roles for covid vaccines, two regulators from the US Food and Drug Administration went to work for Moderna. Peter Doshi reports
The physician-scientist Doran Fink worked his way up at the Food and Drug Administration, with a focus on the regulation of vaccines. Starting as a clinical reviewer in 2010, he was promoted to lead medical officer in the FDA’s Office of Vaccines Research and Review, overseeing a small team of medical officers responsible for infectious diseases and related biological products.
During the covid-19 pandemic Fink took on a public role, appearing in numerous FDA and Centers for Disease Control and Prevention advisory committee meetings to discuss covid vaccines and serving on the senior leadership team for covid vaccine review and policy activities. Part of his role was to engage vaccine manufacturers to advise on the development of vaccines during the pandemic. In mid-2020 Fink announced the FDA’s expectations for any covid vaccine that the agency would consider authorising, and he took part in the ultimate decision to license the Pfizer and Moderna vaccines.
Fink’s LinkedIn profile states that he finished his role at the FDA in December 2022. Two months later he was working at Moderna, heading the translational medicine and early clinical development programme in infectious diseases. He is one of two regulators The BMJ has found to have recently moved to Moderna from the FDA’s Office of Vaccines Research and Review.
Concerns about a “revolving door”—movement of people between the government and the private sector—have persisted for decades, with public confidence in the integrity of government decision making hanging in the balance.1–3 Craig Holman, who serves as government affairs lobbyist for the consumer advocacy organisation Public Citizen, says that government service is fundamentally different from private sector work. Those in the public sector “are expected to serve the public interest,” he says. “And so, we need safeguards to make sure they are serving the public interest.”