There was much discussion about Watson and what it (he?) represented as an advance in computer-based artificial intelligence. (I don’t think anyone considers Watson a true AI.)
The concept of what most people consider a true AI is based on HAL from "2001: A Space Odyssey". We aren’t there yet.
What we keep discovering is that when we program systems, they don’t respond like humans. The example given was the Google car, which does not see or drive the way a human being does.
If a computer evolved to the point where it could rule us, would it want to?
If a computer were developed that could write improved programs for itself (or for other computers), it might evolve so quickly that we would not recognize what was happening until it was too late.
Advances in hardware (processing power) have been the chief, though not the sole, driver of recent advances in computer "artificial intelligence", but the questions (and limitations) seem to have remained fundamentally unchanged for nearly 40 years.