Despite stringent security policies and procedures, academic medical centers face real risk when making their data available for algorithm validation. Likewise, algorithm developers must carefully guard the intellectual property of their models. So how can a "zero trust" environment, in which both parties' assets are kept safe and secure, be leveraged to accelerate generalizable healthcare AI?



