Writing my own gestalt-sentient beings

In the mid-1990s I read a book with an idea that captured my imagination. In Vernor Vinge’s “A Fire Upon the Deep” I read of a canine gestalt-sentient species called the Tines. A gestalt-sentient species, as portrayed in that novel, is one with a sort of group mind or group consciousness, so an “individual” might be made up of five or six beings who share a consciousness.

What really intrigued me was that if one of these five or six died, the dynamics of the consciousness changed: its very psychology. Likewise if a new creature entered the group mind. So if a creature with genetic traits that made it aggressive died and a new, passive creature entered the consciousness, this would affect the overall being’s thinking and behaviour. Also, because components of the whole could die and be replaced, such a consciousness had the potential to live on for as long as it could find component creatures to subsume.

In originally writing “Tempting in Shade” I had a vague concept of this in one of my characters but had decided to wait and explore it in the sequel. However, feedback from my beta-readers suggested that in some cases I was providing insufficient information about certain characters. So the rewritten novel is being expanded to include full disclosure of many things formerly hinted at (or even ignored).

In this rewritten version of “Tempting in Shade” I am experimenting fully with this concept of gestalt-sentience, but with humans rather than some alien race. I figure that humans will increasingly integrate themselves with technology, including artificial intelligences. The traditional science-fiction extrapolation of this idea has been cyborgs. But how would a cyborg think if it had not only a human component but several AI components? Either it would be a master-slave model, with one consciousness dominating the others, or it would be some sort of gestalt-sentience.

Given the latter, what mechanisms would be in place to facilitate it? What happens when new “self-aware” components are added or old ones removed? What would happen if such a consciousness were split and prevented from reforming? Or if one component lost its sanity while the others remained sane?

Yes, a worthy idea to elaborate upon.