Can we let algorithms make decisions we cannot explain?

If XAI means choosing algorithms that are transparent to inspection, such as decision trees or linear regression, then XAI is great. But if it means continuing to use complex algorithms (such as deep neural networks) and merely offering an intuition of how they work, then I think XAI can be risky.
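To make the distinction concrete, here is a minimal sketch (assuming scikit-learn and its bundled iris dataset, which are not part of the original text) of what "transparent to inspection" means for a decision tree: the learned rules can be printed verbatim and audited, with no post-hoc approximation needed.

```python
# Minimal sketch, assuming scikit-learn is available: a shallow decision
# tree's decision logic can be dumped as explicit if/then rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    iris.data, iris.target
)

# Every prediction can be traced to explicit threshold rules:
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Contrast this with a deep neural network, where no such exact rule dump exists and explanation methods can only approximate the model's behavior.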
