Generalizable healthcare AI (artificial intelligence) performs consistently regardless of sources of bias such as race, equipment, provider, or clinical setting. Achieving generalizability requires access to highly diverse data sets, which are typically found only across multiple institutions. Timely access to data is proving to be one of the most significant barriers to delivering on the promise of AI in healthcare.
However, with the advent of confidential computing and privacy-preserving analytics, there is an opportunity to leverage zero trust environments: the technology enables a data owner and an algorithm owner to work together in ways that require neither party to trust the other to protect their data or their intellectual property.
Despite stringent security policies and procedures, academic medical centers face real risk when making their data available for algorithm validation. Likewise, algorithm developers must carefully guard the intellectual property of their models. How, then, can a "zero trust" environment be leveraged to keep both parties' assets safe and secure while accelerating generalizable healthcare AI?
WHAT WILL YOU LEARN
The webinar will take place live on Thursday, April 29th, 2021 at 9:00 AM PST / 5:00 PM BST / 6:00 PM CET and will be recorded for on-demand viewing. If you would like to attend live or receive a link to the recording after the event, please register now.