All the bad things aside, you're right: we're trying to create something like a consciousness that can make decisions the way a human being does, and for all the straightforward computational logic you can give an AI, the hardest thing to make it do is make human-like decisions. At the end of the day it will learn and never repeat its mistakes, but in doing so it loses some of its human nature, which is why I feel this is really about the betterment of co-existing systems rather than mimicking our tribal nature. If the system did decide, however, to start eradicating other systems because it saw them as threats to progress, or because they simply weren't necessary anymore, that would certainly align with human thinking: progress sometimes means discarding what isn't working anymore.