When autonomous algorithms act within socio-digital institutions and make wrong decisions, what are the consequences for legal liability? Is a uniform liability regime required, or should fragmentation along sectoral rules prevail? The article argues for a middle path between the Scylla of a one-size-fits-all solution and the Charybdis of situationism. To ground an appropriate diversity of liability regimes, this article draws on a typology of machine behaviour developed in IT studies and, simultaneously, on sociological and philosophical theories that suggest locating the foundations of three emerging socio-legal institutions in (1) the personification of non-human actors, (2) the human-machine association as an emergent social system with the qualities of a collective actor, and (3) distributed cognition in the interconnectivity of algorithms. The liability regimes proposed in this article will have a considerable impact on the digital public sphere and its regulation. This differentiating approach will contribute significantly to the digital constitution that is currently emerging.