The study of the social and ethical impact of AI is still in its infancy, and contributions to the field have to keep up with the continuous developments of the booming AI industry. It is hence not surprising that, although it is widely used, the concept of ‘AI governance’ remains rather vague and undertheorized. We suggest that ‘AI ethics’ can be defined as the field of applied ethics concerned with the ethical questions that arise in light of actual and conceivable AI systems. ‘AI governance’, then, can be seen as a subdomain of ‘AI ethics’, guided by the assumption that we can collectively influence the development of AI. In the literature, ‘AI governance’ often simply refers to the mechanisms and structures needed to avoid ‘bad’ outcomes and achieve ‘good’ outcomes with regard to the problems and issues already identified and formulated within AI ethics. We argue that, although this outcome-focused view captures one important aspect of ‘good governance’, its emphasis on the effects of governance mechanisms risks overlooking important procedural aspects of good AI governance. One of the most important properties of good governance is political legitimacy. Starting from the assumption that AI governance should be seen as global in scope, this paper has a twofold aim: (a) to develop a theoretical framework for theorizing the political legitimacy of global AI governance, and (b) to demonstrate how this framework can be used as a critical yardstick for assessing the (lack of) legitimacy of actual instances of AI governance. With regard to the first aim, a basic presumption is that, whatever else we believe global political legitimacy requires, it must at least be minimally democratic. Rather than defending a substantive first-order theory of global political legitimacy, our ambition is to spell out and defend some basic normative conditions that any satisfactory account of the political legitimacy of AI governance must respect.
This is done by elaborating on the distinction between ‘governance by AI’ and ‘governance of AI’ in relation to different kinds of authority and different kinds of decision-making: either employing AI in decision-making, or applying decision-making to AI development and deployment. We argue that insofar as we accept that political legitimacy must at least be minimally democratic, an account of the legitimacy of global AI governance must respect that governance of AI and governance by AI stand in a specific normative relationship and raise different normative demands.