🤔 AI Ethics: Conundrum Ahead

Plus: AI runs 10,000 experiments a day on bacteria, Everything Google just announced, IBM intros a slew of new AI services, Elizabeth Holmes: Theranos could have revolutionized Healthcare.

Good morning! 

Welcome to Healthcare AI News, your weekly dose of the latest developments and headlines in the world of Healthcare AI.

In this issue, we explore:

✅ Feature: The ethical implications of using AI in Healthcare Insurance

✅ Headline: How will AI change the payer industry? 18 leaders explain

✅ Industry: Simplifying the business of care using AI and Blockchain

✅ Tech: Who needs a Data Scientist?

✅ Deal Flow: Syneos Health to be acquired for $7.1B

Be sure to read on to see this week's Top headlines, Industry, Tech, and M&A news.

Let's dive in.

HEADLINE ROUNDUP

iStock

  • How the responsible use of AI can supercharge disability inclusion at your company (Read more)

  • Here’s everything Google just announced: A $1,799 folding phone, A.I. in Search and more (Read more)

  • How will AI change the payer industry? 18 leaders explain (Read more)

  • Humane’s new wearable AI demo is wild to watch — and we have lots of questions (Read more)

  • Samsung gets FDA clearance for irregular heart rhythm notifications (Read more)

  • AI identifies anti-aging drug candidates targeting 'zombie' cells (Read more)

  • Metamedicine: A chance for a healthier life for everyone (Read more)

  • Google promised to delete sensitive data. It logged my abortion clinic visit. (Read more)

  • AI runs 10,000 experiments a day on bacteria to speed up discoveries (Read more)

  • NextGen Healthcare says hackers accessed personal data of more than 1 million patients (Read more)

  • Question for your doctor? AI can help (Read more)

  • Will generative AI wreck or rekindle the doctor-patient relationship? (Read more)

  • MyFitnessPal and Google Health Connect launch Integration for Continuous Glucose Monitoring (CGM) (Read more)

💡 Keep reading to catch up on Industry, Tech & Deal flow

🌟 Advertise With Us 🌟

Boost your brand amongst Healthcare's influential circle! Our diverse subscriber base boasts top executives, key decision makers, and visionary professionals from leading organizations – the ultimate platform for your brand's success. 🔥

THE FEATURE

The Ethical Implications of Using AI in Healthcare Insurance Decision-Making

iStock

As an industry rich in all types of data, healthcare is seeing an explosion of AI applications and use cases – but what are the ethical constraints?

Use cases for AI in health insurance range from improving cost efficiency and sharpening risk modeling to reducing waste and driving preventative care initiatives. AI applications are also being used to detect fraudulent claims – a $100 billion per year problem.

As the industry surges full-steam ahead into AI-driven technology, it is important to consider the ethical implications of using AI to drive healthcare decision-making processes.

Ethical Concerns

Decision-making of all kinds – AI-driven or not – has real, personal consequences for the insured population. It is important that the industry look beyond the excitement of new technologies and take time to consider the potential effects. There are three main categories of ethical concern to consider.

  1. Bias and discrimination. We know that human-driven processes are often inherently biased. AI-driven applications are still created by flawed humans, which means they can also produce biased results. Large, diverse pools of source data help to reduce bias, but it may never be eliminated entirely. Bias becomes most worrisome when it can lead to discriminatory practices – a real danger in the healthcare insurance industry. One idea for reducing bias is the “centaur” approach: if a system has the capability to “materially impact someone’s life,” it should keep a human being in the loop who understands why decisions are made (a minimal sketch of this idea follows this list).

  2. Data privacy and security. To fully harness the incredible potential of AI, large amounts of personal healthcare data must be leveraged. To be used in this way, all of that data must be extracted, decrypted, and analyzed – while ensuring that the privacy of patients and healthcare providers is protected. As often happens with emerging technologies, policymaking lags behind, and many issues remain unclear when it comes to data ownership and permitted usage.

    Currently, data audits are a valid way to ensure that companies are ethically using available data. These third-party audits, although retrospective, evaluate privacy practices and are usually paired with risk assessments.

  3. Transparency and explainability. These two terms are often used interchangeably; both focus on making sure that an AI model is open and visible to stakeholders. For health insurance, this means being able to explain exactly how a decision was made and what data went into the determination. To make this happen, AI data scientists and insurers need to be able to communicate well – even though they sometimes speak different “languages.”

    One useful transparency tool is another type of audit called “algorithmic auditing”. This is a systematic process that documents each step in an AI model, reviews the mechanisms, and then assigns a “quality guarantee seal” to models.
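
To make the “centaur” idea from point 1 concrete, here is a minimal Python sketch of a human-in-the-loop decision gate. Everything in it – the ClaimDecision fields, the route_decision function, and the 0.95 confidence threshold – is a hypothetical illustration under assumed names, not any insurer’s actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "centaur" (human-in-the-loop) decision gate.
# All names, fields, and thresholds are illustrative assumptions.

@dataclass
class ClaimDecision:
    claim_id: str
    model_score: float          # model's confidence that the claim should be denied
    materially_impactful: bool  # e.g., denial of coverage or a high-dollar claim
    rationale: str              # model-generated explanation, logged for later audits

def route_decision(decision: ClaimDecision, auto_threshold: float = 0.95) -> str:
    """Return 'auto-approve', 'auto-deny', or 'human-review'."""
    # Anything that can materially impact someone's life always goes to a human
    # reviewer who can see why the model decided what it did.
    if decision.materially_impactful:
        return "human-review"
    # Low-stakes decisions can be automated, but only at high confidence.
    if decision.model_score >= auto_threshold:
        return "auto-deny"
    if decision.model_score <= 1 - auto_threshold:
        return "auto-approve"
    return "human-review"

# A coverage denial is routed to a human regardless of model confidence.
print(route_decision(ClaimDecision("C-001", 0.99, True, "high predicted cost")))  # human-review
```

The design choice is simple: automation is reserved for low-stakes, high-confidence calls, and the logged rationale gives the data audits and algorithmic audits described above something concrete to review.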

The Role of Regulation and Ethical Guidelines

Although regulations are still largely in development, policy discussions lean towards applying the same rigorous scrutiny to AI software and algorithms as is currently applied to clinical trials and research studies.

  • The best possible quality of data should be used to reduce the risk of discrimination.

  • The details surrounding the purpose, development, and risks of AI and machine learning applications should be transparent.

  • There should be appropriate human oversight of processes to reduce inherent risk.

Case Studies of AI in Healthcare Insurance

  • In a real-world example, U.S. patients were assessed using AI systems to determine their level of illness and place them into high- or low-risk categories. The algorithm assessed 200 million people based on previous healthcare spending to predict their level of need, and it placed more white patients than black patients in the high-risk category. Hospitals and health insurance companies then used this output to decide which patients could benefit from a high-risk care management program, offering special care programs to people with chronic illnesses.

    It was later found that the algorithm and its data were inherently flawed and that more black patients should have been offered the high-risk care program. The data itself was biased because black patients often did not seek medical attention (hence lower spending) or did not trust healthcare systems. In this case, the AI algorithm inadvertently contributed to healthcare disparity.

    If the algorithm had used actual clinical data rather than cost data, the share of black patients receiving additional help would have increased from 17.7% to 46.5%. When the results of the study identifying the bias came out, eight of the top U.S. health insurance companies, two major hospitals, and the Society of Actuaries declined to comment. (A toy illustration of this cost-as-proxy problem follows this list.)

  • Another thought-provoking issue is legal liability. When the decision-making process is no longer carried out by humans and a mistake results in harm or negligence, who is responsible? It very much depends. The FDA regulates medical AI products to a degree; however, the AI is continuously learning, and a product may not look the same next year as it does this year.

    If an AI product is purchased from a tech company for use in healthcare insurance, and a decision leads to an adverse event – is the insurance company or the tech company liable? Can an algorithm be sued? There aren’t clear answers until legal precedents are set, but the issues bear consideration and caution as the industry moves into the future.
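
To see why the choice of prediction target mattered so much in the case study above, here is a toy Python sketch with made-up numbers. It only mirrors the pattern the study reported: when prior spending stands in as a proxy for clinical need, patients who use the healthcare system less are overlooked.

```python
# Toy illustration of the cost-as-proxy problem. The patients and numbers
# below are invented; they are not data from the actual study.

patients = [
    # (patient_id, prior_spending_usd, chronic_conditions)
    ("A", 12000, 2),
    ("B",  3000, 4),   # high clinical need, but low spending
    ("C", 15000, 1),
    ("D",  2500, 3),   # high clinical need, but low spending
]

def flag_by_spending(pts, threshold=10000):
    """Proxy target: prior cost stands in for future need."""
    return {pid for pid, spend, _ in pts if spend >= threshold}

def flag_by_clinical_need(pts, threshold=3):
    """Direct target: count of active chronic conditions."""
    return {pid for pid, _, conditions in pts if conditions >= threshold}

print(sorted(flag_by_spending(patients)))       # ['A', 'C']
print(sorted(flag_by_clinical_need(patients)))  # ['B', 'D'], a different group entirely
```

Same patients and same machinery; only the prediction target changes, and with it the entire population that gets offered extra care.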

INDUSTRY NEWS

Difood

  • Highest paid Health insurance CEOs: Six CEOs raked in a record $123 million last year (Read more)

  • Elizabeth Holmes said she still believes Theranos could have revolutionized healthcare and is working on new inventions: 'I still feel the same calling to it' (Read more)

  • MIT research team develops Microneedle Patch “Printer” for vaccine delivery (Read more)

  • Mayo Clinic Platform expands its distributed data network with global partners to transform patient care (Read more)

  • Machine Learning helps identify Pneumonia as driver of COVID-19 deaths (Read more)

  • U.S. death rate falls as COVID slips to 4th most common cause of death (Read more)

  • AI-assisted ultrasound may improve breast mass triage in low-resource settings (Read more)

  • Simplifying the business of care using AI and Blockchain (Read more)

  • A mental illness in your 20s and 30s could mean a greater chance of heart attack and stroke (Read more)

TECH NEWS

Giphy

  • Digital twins emerge as the latest tool for growing cities smarter (Read more)

  • How cloud AI infrastructure enables radiotherapy breakthroughs at Elekta (Read more)

  • IBM intros a slew of new AI services, including generative models (Read more)

  • PrivateGPT - GitHub Repo (Read more)

  • Who needs a data scientist? Let Microsoft’s code interpreter do the work for you (Read more)

  • Is critical thinking the most important skill for software engineers? (Read more)

  • Here’s how UPMC’s CTO is thinking about AI in Healthcare, from sorting data to fielding questions (Read more)

  • You don't need Scrum. You just need to do Kanban right (Read more)

  • AI this week: Doomers vs. builders, LLMs and Healthcare, and search fails (Read more)

  • AI: Human Augmentation in Healthcare (Read more)

  • The ultimate guide to automatic accessibility testing in CI/CD for React Apps (Read more)

DEAL FLOW

Giphy

  • Syneos Health to be acquired by a private investment consortium for approximately $7.1 Billion (Read more)

  • Digital health company Babylon to go private with $34.5M in funding (Read more)

  • Upswing Health partners with XTRA to enhance its virtual therapy solution with AI-powered motion tracking (Read more)

  • ConcertAI's TeraRecon expands AI model and clinical collaboration capabilities of Eureka clinical AI (Read more)

  • Northwell launches ocular AI company with $12M investment (Read more)

  • Lavita AI raises $5M seed financing to launch the first patient-driven health information marketplace to accelerate life sciences innovation (Read more)

A SPECIAL MESSAGE FROM OUR PETS! 🐾

📢 Help us grow by sharing this AI-mazing Newsletter with your network. 📩

TOP 3 TWEETS OF THE WEEK