It's only as dangerous as bad architecture allows it to be, or as a bad actor wants it to be. What would be troubling is not the AI per se, but the autonomy given to it, and even then it would require a series of directives and decision patterns to be defined, making the AI able to choose one path or another after processing a specific set of information. AI can't build physical things, for example, unless you merge it with autonomous robotics, and even then it would be limited by energy and space constraints in its ability to do harm.
Take a concept like Skynet or HAL, where an AI takes over everything: in reality it would most likely only be able to knock out satellites and the internet, things that could, in theory, be overridden back to safety fairly quickly. Even that could be prevented by a "simple" directive in the AI's code.
Have you ever watched I, Robot? That Three Laws concept is actually more plausible than an AI growing uncontrollably.