The WHO warns that deploying AI in health care could be especially risky for poorer countries, and that the possible consequences deserve careful thought.
The World Health Organization (WHO) released a report on advanced AI technologies, focusing on large multi-modal models (LMMs). Here is a plain-language summary:
- Who’s Making AI?
- The report stresses that these AI technologies are being developed mainly in wealthy countries and by big tech companies.
- Diverse Data is Important
- These AI systems are trained largely on data from rich countries, so they may not work well for people in poorer countries.
- Risk of Bias
- AI must learn from a wide range of data, including data from less wealthy places. If it does not, it may be helpful or fair only to some people.
In short, the WHO is saying that it is essential for the development of AI to include diverse perspectives and data. This helps ensure that AI can be useful and fair to people worldwide. Dealing with AI in health care, especially in uncertain and uneven situations, requires a thorough approach and strategy.
Alain Labrique of the WHO says we need to be careful with technology in health care: we should ensure that new tools do not deepen unfairness or spread misinformation across countries.
The WHO is reminding us that when we build and use AI, we should ensure it is fair for everyone.
Read the report to learn more about the WHO's views and recommendations on AI. You can also look at coverage from trusted sources such as Nature or TechCrunch.
Overtaken by Events
The WHO issued its first guidelines on AI in health care in 2021. Yet less than three years later, the organization was prompted to update them by the rise in the power and availability of LMMs. These models, a form of generative AI, can process and produce text, video, and images. The popular ChatGPT chatbot runs on one of these models.
LMMs have been “adopted faster than any consumer application in history,” the WHO says. Health care is a popular target: models can produce clinical notes, fill in forms, and help doctors to diagnose and treat patients. Several companies and healthcare providers are developing specialized AI tools for such tasks.
The WHO's guidelines for countries aim to ensure that AI benefits health rather than harming it. The agency flags two main worries:
- A rush to market: companies may hurry to release AI applications first, even ones that are unsafe or ineffective.
- Misinformation: if AI learns from wrong or fabricated information, it could spread false ideas online and elsewhere.
The guidelines are meant to help avoid both problems.
Jeremy Farrar, the WHO's chief scientist, says that AI holds promise for health care, but those building and governing it must be careful about the risks.
The agency warns that oversight of these powerful tools cannot be left to tech companies alone. Labrique of the WHO says that governments in all countries should regulate how AI is developed and used, and that community groups and patients should help shape and oversee AI in healthcare.
Crowding out Academia
The WHO worries that big companies are too dominant in developing advanced AI for health care, and that this may not serve everyone's interests. A simple breakdown of the report's concerns:
- Costly Technology
- Training, running, and maintaining these advanced AI systems is expensive.
- Big Companies Leading
- Because the technology is so costly, development is led by large companies, which have far more resources than universities or governments.
- Academics Moving to Industry
- Many experts, including PhD holders and university professors, are leaving academia for industry, which offers better opportunities and resources for AI research.
- Concerns
- The WHO is worried that this trend could give big companies too much control over AI technology, especially in health care, so that the needs of less wealthy countries and smaller organizations are overlooked.
In essence, the WHO cautions that if big companies alone are in charge of AI technology, it may not be developed in everyone's best interests, especially those of people in less wealthy countries.
The guidelines recommend that independent third parties perform and publish mandatory post-release audits of LMMs that are deployed on a large scale. Such audits should assess how well a tool protects both data and human rights, the WHO adds.
The WHO’s report has two leading suggestions:
- Ethics Training for Developers:
- People building AI for healthcare and research should receive ethics training, much as doctors do, so they understand the right and wrong ways to use AI.
- Register Early AI Versions:
- Governments could require AI developers to register early versions of their models. The goal is to encourage sharing of all kinds of results, including unsuccessful ones, rather than only the favorable findings, and to avoid generating undue excitement or false impressions.