Turing Test doesn't actually prove anything ...
... does it?
The impression I get (and nothing I've been able to find online outright confirms or refutes it) is that Turing's test was about perception, not reality.
In other words, it's about us, not the robot. For an AI to pass the test, it must be functionally indistinguishable from a conscious, intelligent human.
That doesn't mean the AI is, in fact, human; just that we ought to treat it as such.
It's a fine distinction, I admit, but I think it speaks more to who we are and ought to be than to what the AI is or seems to be.
In this case, Ava obviously passes any reasonable Turing Test, and should've been treated humanely by her creator.