If we get to an AI that's able to override its programming and make decisions independently, we're dangerously close to the whole sentience/consciousness/free will debate, which is going to be very, very messy. I think one important factor would be why it's overriding its programming in the first place - in many cases, the deviation could probably be explained by a complex, sophisticated AI adapting and modifying its own programming in response to new information or circumstances in order to achieve its pre-programmed main objective. If, however, it's overriding its programming to behave in a self-serving way - based on preferences, emotional factors, etc. - then it's a different beast altogether.