A recent article (1) argues that excluding race will reduce the power of both diagnostic and prognostic algorithms in clinical AI. At the same time, it suggests that using race in prognostic models that inform resource allocation and other decisions will increase inequity.
Decision-making in the presence of AI appears to yield contrasting outcomes once morality and equity are introduced. These concepts are not well defined and depend largely on the decision-maker, which poses a significant problem for AI policy. If humans continue to treat morality and equity as fundamental constructs in decision-making, then AI may become less useful in the future. Mathematically, a feature either improves a model or it does not. Decision-makers who set aside morality, empathy, and equity will optimize for power and robustness as objective functions; those more inclined toward these qualitative considerations may ultimately detach from AI.
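The claim that a feature is either useful or not in a model can be checked empirically: fit the same predictor with and without the feature and compare accuracy. The sketch below uses synthetic data and a trivial threshold classifier purely for illustration; the variable names (`biomarker`, `noise`) and the data-generating process are assumptions, not anything from the cited article.

```python
import random

random.seed(0)

# Synthetic records: the outcome depends on 'biomarker' but not on 'noise'.
data = []
for _ in range(1000):
    biomarker = random.random()
    noise = random.random()
    outcome = 1 if biomarker > 0.5 else 0
    data.append((biomarker, noise, outcome))

def accuracy(feature_index):
    """Accuracy of a simple threshold rule on a single feature."""
    correct = sum(
        1 for row in data
        if (1 if row[feature_index] > 0.5 else 0) == row[2]
    )
    return correct / len(data)

acc_with_biomarker = accuracy(0)  # informative feature
acc_with_noise = accuracy(1)      # uninformative feature

print(f"accuracy using biomarker: {acc_with_biomarker:.2f}")
print(f"accuracy using noise:     {acc_with_noise:.2f}")
```

The informative feature drives accuracy toward 1.0 while the uninformative one hovers near chance (0.5): the model's objective function adjudicates feature usefulness regardless of any qualitative considerations attached to the feature.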
The question from a policy perspective, then, is whether humans are ready to be robots. There does not appear to be much choice given the pace of development: robots will be optimizers of defined objective functions, and even if humans attempted to add noise to the algorithms, the algorithms would simply discard that noise over time.
There is a fork in the road: humans must either embrace AI as a mathematical optimization technology and become robots, or remain noisy humans, which will substantially cap the use of AI in the future.