Do you think we should be trying to develop AI that is BOTH smarter than us and independent (capable of making its own decisions without any human control)? Or is that something that is bound to go wrong eventually?