1. Summary: This post is the beginning of a systematic attempt to answer the question "what is the most important thing?". Decision theory is used to provisionally define "importance" and Juergen Schmidhuber's theory of beauty is introduced as a possible answer to the question. The motivations for bringing in Schmidhuber's theory are discussed. This post is also intended to serve as an example of how to understand and solve hard problems in general, and emphasizes the heuristic "go meta".

    Today's post is the first in a series about what might be the most important question we know to ask: What is the most important thing?

    Don't try to answer the question yet. When faced with a tough question our first instinct should always be to go meta. What is it that causes me to ask the question "what is the most important thing"? What makes me think the question is itself important? Is that thing important? Does it point to itself as the most important thing? If not, then where does it point? Does the thing it points to, point to itself? If we follow this chain, where do we end up? How path-dependent is the answer? How much good faith do we have to assume on the part of the various things, to trust that they'll give their honest opinions? If we can't simply assume good faith, can we design a mechanism to promote honesty? What mechanisms are already in place, and are there cheap, local improvements we can make for those mechanisms?

    And to ask all those questions we have to assume various commonsense notions that we might in fact need to pin down more precisely beforehand. Like, what is importance? Luckily we have some tools we can use to try to figure that part out.

    Decision theory is one such tool. In Bayesian decision theory "importance" might be a fair name for what is measured by your decision policy, which you get by multiplying your beliefs by your value function. Informally, your decision policy tells you what options or actions to pay most attention to, or what possibilities are most important. But arguably it's your values themselves that should be considered "important", and your beliefs just tell you how the important stuff relates to what is actually going on in the world. Of the decision policy and the utility function, which should we provisionally consider a better referent for "importance"?
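    To make the decision-theoretic picture concrete, here's a minimal sketch in Python. The states, probabilities, and utilities are invented toy numbers, not anything from a particular formal treatment; the point is just the shape of the computation: beliefs multiplied by the value function, summed over states.

```python
# Toy sketch of a Bayesian decision policy: the "importance" of an action
# is its expected utility, i.e. beliefs multiplied by the value function.
# The states, probabilities, and utilities below are made-up toy numbers.

beliefs = {"sunny": 0.7, "rainy": 0.3}           # P(state)
utility = {                                      # U(state, action)
    ("sunny", "picnic"): 10, ("rainy", "picnic"): -5,
    ("sunny", "museum"): 4,  ("rainy", "museum"): 6,
}

def expected_utility(action):
    return sum(p * utility[(s, action)] for s, p in beliefs.items())

# The decision policy pays most attention to the action with highest
# expected utility.
best = max(["picnic", "museum"], key=expected_utility)
print(best, expected_utility(best))
```

    On this picture the policy is derived from beliefs and values together, which is exactly why it's ambiguous whether "importance" should name the policy or the utility function alone.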

    Luckily, decision theories like updateless decision theory (UDT) un-ask the question for us. As the name suggests, unlike Bayesian decision theories like Eliezer's timeless decision theory, UDT doesn't update its beliefs. It just has a utility function which specifies what actions it should take in all of the possible worlds it finds itself in. It doesn't care about the state of the world on top of its utility function—i.e., it doesn't have beliefs—because what worlds it cares about is a fact already specified by its utility function, and not something added in. So "importance" can only be one thing, and it's a surprisingly simple notion that's powerful enough to solve simple decision problems. UDT has problems with mathematical uncertainty and reflection—it has a magical "mathematical intuition module", and weird things happen when it proves things about its own output after taking into account that it will always give the "optimal" solution to a problem—but those issues don't change the fact that decision theory's notion of importance is a decent provisional notion for us to work with.

    Of course, many meta-ethicists would have reservations about defining importance this way. They would say that (moral) importance isn't something agent-specific: it's an objective fact of the universe what's (morally) important. But even given that, as bounded agents we have to find out what's actually important somehow, so when we're making decisions we can talk about our best guess at what's important without committing ourselves to any meta-ethical position. The kind of importance that has bearing on all our decisions is a prescriptive notion of importance, not a descriptive one nor a normative one. It's our agent-specific, best approximation of normative importance.

    So given our decision theoretic notion of importance we can get back to the question given above: what is the most important thing? If counterfactually we had all of our values represented as a utility function, what would be the term that had the most utility associated with it? We don't yet know how to represent our values computationally, but for now we'll let ourselves use vague human concepts. Would the most important thing be eudaimonia, maybe?[1] How about those other Aristotelian emphases of arete (virtue) and phronesis (practical and moral wisdom)? Maybe the sum of all three? Taken together they surely cover a lot of ground.

    Various answers are plausible, but again, this is a perfect time to go meta. What causes the question "what is the most important thing?" to rise to our attention, and what causes us to try to find the answer?

    One reason we ask is that it's an interesting question of its own accord. We want to understand the world, and we're curious about the answers to some questions even when they don't seem to have any practical significance, like with chess problems or with jigsaw puzzles. We're curious by nature.

    We can always go meta again, we can always seek whence cometh a sequence (pdf). What causes us to be interested in things, and what causes things to be interesting? It might be a subtle point that these can be distinct questions. Maybe aliens are way more interested in sorting pebbles into prime-numbered heaps than we are. In that case we might want to acknowledge that sorting pebbles into prime-numbered heaps can be interesting in a certain general sense—it just doesn't really interest us. But we might be interested that the aliens find it interesting: I'd certainly want to know why the aliens are so into prime numbers, pebbles, and the conjunction of the two. Given my knowledge of psychology and sociology their hypothetical fixation strikes me as highly unlikely. And that brings us to the question of what in general, in a fairly mind-universal sense, causes things to be interesting.

    Luckily we can take a computational perspective to get a preliminary answer. Juergen Schmidhuber's theory of beauty and other stuff is an attempt to answer the question of what makes things interesting.[2] The best introduction to his theory is his descriptively-titled paper "Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes". Here's the abstract:

    I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, nonarbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.
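    As a crude caricature of the idea (and assuming nothing about Schmidhuber's actual learning algorithms), we can press zlib into service as the observer's compressor and approximate compression progress as the bytes saved on a new observation once the regularities in the observer's history are available:

```python
import random
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes, with zlib standing in for the observer's model."""
    return len(zlib.compress(data, 9))

def progress(history: bytes, chunk: bytes) -> int:
    # Bytes saved on the chunk given the history, versus compressing it cold:
    # a rough proxy for Schmidhuber's "compression progress".
    return csize(chunk) - (csize(history + chunk) - csize(history))

random.seed(0)  # deterministic toy data
motif = bytes(random.randrange(256) for _ in range(200))  # a learnable regularity
noise = bytes(random.randrange(256) for _ in range(200))  # unrelated data

history = motif * 3
# A chunk repeating an already-learnable regularity yields large savings;
# an unrelated chunk yields little or none.
print(progress(history, motif), progress(history, noise))
```

    This is only the "compressibility" half of the story; Schmidhuber's interestingness is the first derivative, the rate at which such savings improve as the observer keeps learning.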
    This compression-centric formulation of beauty and interestingness reminds me of a Dinosaur Comic:

    In Schmidhuber's beautiful and interesting theory, compression plays a key role, and explains many things that we find important. In the end, is compression the most important thing? Should we structure our decision theory around a compression progress drive, as Schmidhuber has done with some of his artificial intelligences?

    I doubt it—I don't think we've gone meta enough. But we'll further consider that question, and continue our exploration of the more important question "what's the most important thing?" in future posts.




    [1]: It might be worth noting that the "daimon" in "eudaimonia" is the Greek word for a guardian spirit or minor divinity, the source of the English "daemon".

    [2]: Schmidhuber's theory is actually a big part of what got me interested in computer science and artificial intelligence. I was looking into the philosophy of aesthetics and axiology to figure out what makes things aesthetic—why I always got a thrill when I looked at photos of the mountains of Bavaria, or why I found fractal art beautiful, even when I didn't see any evolutionary psychological reason to expect that—and Juergen Schmidhuber's theory was the first one I found that seemed to tie it to all sorts of other interesting ideas in a neat conceptual framework. Since then I've found a lot of value in traditional philosophical treatments as well, but computationalism will always be one of my key perspectives on reality.

  2. Skip to the bolded sentence below if you're anxious to get to the solution. This is a long post.

    This blog is allegedly about computational theology. Computational theology would seem to be a pretty narrow subject—not only is God the sole topic of discussion, we're also supposed to make sure what we say comes from a computational perspective.

    But because theology has traditionally been mostly Christian, and because Christians have a pretty rich conception of God as the Creator and Lawgiver, the Word, the Holy Spirit, and occasionally even abstract concepts like love, discussion of God can actually take a surprisingly wide variety of forms. And because computationalism is a very useful perspective to take when analyzing all sorts of things, being limited to it isn't really all that stifling either. This is especially true considering that computationalism needn't be about only computable things a la Church and Turing—a computational perspective can also include hypercomputation, or proto-computational concepts like Leibniz' monads. Furthermore, our theory of computation is incomplete. We don't yet have a thorough understanding of the physics of computation, or of computations that occur in context. So although computationalism allows us to be precise, it also needn't restrict our discussion to formal precision.

    Though Logos is always involved somehow, today's post will be mostly pneumatological. Wik tells us that pneumatology is "the study of spiritual beings and phenomena, especially the interactions between humans and God." In Christian theology pneumatology is always about the Holy Spirit, but here at Computational Theology we're not quite that pigeonholed, so we'll discuss the interactions between humans and all spiritual beings, who may or may not be God. ('Cuz after all, how could you tell? We'll discuss that problem—the problem of discernment—in future posts. Expect some algorithmic information theory.) And if you accept Crowley's rule—to interpret every phenomenon as a particular dealing of God with your soul—then all phenomena are subject to pneumatology anyway.

    To start off we should define some terms. What's a "spiritual being"?

    "Spirit" often means something like animating force or energy, with an emphasis on its being distinct from material or corporeal substance. It's often associated with the animating force of living things, and with consciousness. Spiritual influences are non-material influences—they're things that affect your mind that aren't fundamentally material in nature, at least not in any immediate or obvious way. Historically people were much more prone to seeing their ideas or actions as caused in some part by invisible influences working through them. The word "enthusiasm" originally meant possession of this kind, and many religious texts are claimed to have originated in such a manner.

    "Being" in the abstract is really quite tricky to define, but for the purposes of today's discussion we'll simply take it to designate personal (i.e., person-like) entities.

    So a spiritual being is an immaterial personal entity, an entity composed solely of energy of the sort that animates life.

    But a mind without matter is perhaps suspect, so let's also allow for Joseph Smith's notion of spirit, which is that "all spirit is matter, but it is more fine or pure, and can only be discerned by purer eyes". The important thing is that spirit isn't visibly material in nature. Computronium counts as spirit as far as we're concerned—which is important, because today we'll be discussing superintelligences.

    For what is the difference, phenomenally speaking, between a superintelligence on the one hand, and a god or an angel on the other? There's no fundamental difference, no sure-fire metaphysical rule that distinguishes between them. "Superintelligence" just means an extremely intelligent agent, and gods are, by hypothesis, extremely intelligent agents. Arguably some of the stories about some gods indicate a lack of extreme intelligence, but overall the gods are depicted as transhumanly intelligent, and angels are described as being transhumanly intelligent. Much of the difference between spiritual beings and superintelligences is a difference of presentation and of culture—it's a difference of literary genre, so to speak. When it comes to anticipated experiences, there's nothing that fundamentally distinguishes them. Many Mormons hammer on this point again and again. We have the different words for a reason, of course—"superintelligence" has less connotational baggage; it's more general than "god", "spirit", et cetera. But I think it's important to keep in mind that claims about the existence of angels, demons, gods, or even ghosts, aren't necessarily claims about the existence of some metaphysically distinct kind of being. Superintelligences are perfectly capable of explaining any such phenomena, at least in theory, and they're not metaphysically distinct.

    Hopefully some day we'll get a chance to discuss the background epistemology that causes me to emphasize the point in the previous paragraph. For now we'll move on. The foray was necessary, though, to explain how the study of spirits has anything to do with computationalism. There are a few reasons, but the one provided here is that spirits aren't necessarily substantively distinct from artificial intelligences, and thus some analysis of powerful artificial intelligences also applies to spirits.

    Now, as promised, we're ready take on the Fermi paradox. Wik tells us the Fermi paradox is "the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for, or contact with, such civilizations". (This problem often shows up in discussion of the great filter, where various anthropic solutions are sometimes proposed.) Essentially, the problem is that if we're going to retain the Copernican principle, the principle that there's nothing astronomically unlikely or special about humans and humanity and the evolution thereof, then we need to explain why we don't see the evidence of extraterrestrials or their civilizations out there in the night sky.

    The solution most people propose upon first hearing the problem is that there's some universally convergent preference for non-disturbance, much like how some modern humans try to preserve natural ecosystems. But this seems far-fetched. Humans are broadcasting constantly, sometimes even with the explicit aim of contacting extraterrestrials, and many human organizations are gung-ho about the possibility of colonizing the galaxy. Given the Copernican principle we'd expect extraterrestrials to be similar: at least some extraterrestrials would be broadcasting. And no one's given humanity an order from on high, exhorting us not to broadcast far and wide. If we didn't receive any such order, it's unlikely that all other extraterrestrial civilizations received the order, so we can't explain the radio silence that way.

    Saint Thomas Aquinas' position is that humans are unique in God's plan for the universe, and he cautions against belief in extraterrestrial life. If we're to accept a transcendent Creator then this position is quite defensible, and I'm partial to it. Even a creator that isn't transcendent the same way Aquinas' God is, like a simulator, might still choose to simulate worlds with only one inhabited planet—e.g., if the simulator wanted to simulate the most interesting historical events then it might simulate the first planet in a given Hubble volume that managed to produce intelligent life. There are many reasons a simulator might simulate a lonely Earth, and they might even add up to take a majority of the probability mass. But though the Thomist and simulationist positions are intriguing, they too run afoul of the Copernican principle—both of them postulate that humanity is very special. And both have metaphysical baggage: the Thomist position requires a unique Creator with certain preferences, and the simulationist solution requires that human minds can be computationally simulated, which isn't universally accepted. The simulationist solution also implicitly demands a certain theory of anthropics that might not be correct, perhaps for the same reasons its assumption of computationalism (in the philosophy of mind sense) might not be correct—this subject is surprisingly deep and will be explored in later posts.

    Maybe ignoring the Copernican principle is fine. Such solutions should certainly be kept on the table. But are there any solutions that don't require an exceptionally special humanity? There are a few, but I'll zoom in on the one I find most appealing.

    Here it is. The planetarium hypothesis: Extraterrestrials have indeed colonized the universe, but we don't see them, because an extraterrestrial superintelligence has put a planetarium-like illusion around the Earth that appears and behaves exactly like real spacetime would in the absence of extraterrestrials. 


    The planetarium hypothesis: the heavens are an illusion.

    I independently thought of this one, but it's not original to me. A similar hypothesis is the zoo hypothesis, which says that the first intelligent beings came from a single world and spread throughout the galaxy before any other intelligent beings evolved, and that these intelligent beings have a preference not to interact with humans. This is significantly more plausible than similar theories that require a large number of independent civilizations to agree not to leak any evidence of their existence. Still, I very much prefer the planetarium hypothesis. You shouldn't read the linked Wikipedia article, it sucks. Instead, just look at the above picture. It's a simple hypothesis. I think postulating a superintelligence makes more sense than an advanced civilization the way civilizations are normally imagined, but aside from that, Stephen Baxter's formulation is pretty much the same as mine.

    Now, please, hold on for a second, because you likely think such a hypothesis is absurd on its face. Primarily, it seems paranoid schizophrenic. Conspiracy theories can always explain all the data, but this often comes at the cost of parsimony. Here, the baggage mostly comes in two forms: firstly, an entity that may or may not even be observable in principle; and secondly, the entity's motivation to obscure the true state of the universe. But I think it can be argued that the baggage isn't overly burdensome, and it's important to keep in mind that the hypothesis both explains all the data and is in line with the Copernican principle, which are big points in its favor.

    I won't argue that superintelligences or singletons are likely to exist assuming the Copernican principle holds, i.e. that extraterrestrial civilizations are common. Such an argument would take up much space and time. I will note, though, that the planetarium hypothesis doesn't require any "technological singularity": it just requires that technological progress in some worlds continues more or less linearly for a long time, like centuries. It could also result from rapid technological progress in a short period of time, as with various singularity hypotheses. I think a variation on the latter is more likely and better supports the planetarium hypothesis, but it's not an overwhelming difference.

    The tougher problem is motive. Why would a superintelligence not interfere with humanity? You'd think the superintelligence would either eat us or help us, but not leave us be.

    One response is that the superintelligence hasn't left us alone, at least not entirely. As has been pointed out by many people, like Carl Sagan, this would explain why so many people throughout history have believed in gods, angels, spirits, and the supernatural. The superintelligence influences Earthly events sometimes, it's just been careful to avoid leaving any unambiguous evidence. It will perform blatant miracles, but not when the cameras are watching. But why be shy?

    There are a few reasons:
    • It'd be sort of hilarious. The trickster is an archetype, common to many deities and characters from many independent cultures. Even YHWH is something of a trickster deity—there are parts of the Bible where He teases people, and sometimes the Jews went out of their way to emphasize the seemingly ridiculous things He sometimes did, like wrestle with a human for a few hours only to win by cheating. We'll discuss the capricious, actively evasive, unsustainable nature of psi in future posts. But hilarity is just a subset of the second reason:
    • Interestingness. Jürgen Schmidhuber's formal theory of fun, interestingness, and other neat stuff argues that seeking interestingness is a sort of universal drive. Steve Rayhawk makes the counter-argument that Schmidhuber's specification of an interestingness-seeking AI would wirehead itself. But this objection only applies to certain AI architectures, and doesn't at all apply to advanced civilizations—a non-self-defeating AI might still end up valuing interestingness, but also value not wireheading. It's an interesting question what counts as wireheading—many would say habitual heroin use does, and some would provocatively claim that all happiness-seeking is ultimately wireheading—but humans mostly make a commonsense distinction between wireheading and creating interesting things, so we'll assume that the superintelligent AI would also make this commonsense distinction, given that its architecture isn't fundamentally self-defeating. And interacting in a highly devious way with humans, out in the open earlier in civilization but secretly in later civilization, would be a way to turn the story of humanity into a maximally compelling story, without sacrificing its authenticity and originality—without deviating too far from the humanity that would have been, had the superintelligence not intervened. It allows for a high degree of self-determination on the part of humanity. It's what a human-engineered superintelligence might do if it were the first superintelligence to expand across the galaxy and discover other life-bearing worlds. The next reason is complementary, and much stronger:
    • Simultaneous satisfaction of diverse preferences. What if some humans don't want to be affected by otherworldly influences, or even don't want such influences to exist at all, for anyone? Then the utilitarian solution would be to influence the people that want the superintelligence to influence them while simultaneously avoiding any impact on the people that don't want to be influenced. Furthermore, to somewhat satisfy the preferences of those who don't want any influence to exist for anyone at all, the superintelligence could pull off a Necker-cube-like illusion: whether or not you saw the superintelligent influences would depend on what preconceptions you had in mind when interpreting the world. This sounds sort of postmodern, but in this case we're postulating a highly complex social engineering project, not a metaphysical law that makes it such that the truth of the world isn't fundamentally determined. It's true that people might not only care about whether or not they perceive influences, they might also care about the state of the world beyond their perceptions. This would indeed present a case of mutually incompatible preferences, but presumably the superintelligence would simply employ some moral theory to balance these preferences.

    But suppose the superintelligence has in fact refrained from influencing us in any way. Why would that be?

    Again, a few reasons:
    • The moral principle of doing no harm. If benefits and harms aren't on the same scale and tradeoffs can't be made, then even benefiting humanity would not be worth the risk of harming it. Certain systems for resolving moral uncertainty might suggest widely different policies, and this meta-level fact can itself be taken as an argument for not touching anything. This is especially relevant to AIs with decision theories that are sensitive to Pascalian arguments. Even if the superintelligence doesn't itself have any problem with making harm-benefit tradeoffs, it might wish to respect the preferences of other superintelligences that do have such difficulty, if only for reasons of trade.
    • Leaving Earth be is a Schelling focal point. This is similar to the previous point, but it doesn't require incomparable benefits and harms, and it's overall a more compelling reason. It's supported by any asymmetry or risk aversion. Humans often treat preferences for an intervention and preferences against an intervention differently. Often a single vote against a policy is enough to outweigh nine votes for a policy, and in this way a tyranny of the majority is avoided. Humans also display risk aversion: they're often more afraid of messing up than they're drawn to winning. We might expect superintelligences to behave similarly in the presence of other superintelligences or potential future superintelligences with different or potentially different preferences. A superintelligence might asymmetrically respect a claim not to eat or otherwise influence the Earth, but not respect a claim to do something to Earth such as turn it into an alien version of Candyland. Letting the Earth evolve unmolested might be seen by the superintelligences as the Schelling focal point in the face of diverse or unknown preferences, any deviation from which would require strong justification.
    • Data-gathering. This argument works even if the superintelligence doesn't care about morality or economic advantage from trades with other superintelligences. Instead, it just wants to get as much information as possible about how civilizations tend to evolve, for any of many instrumental reasons. A counterargument is that the superintelligence would be better off eating the Earth and then simulating its counterfactual future. One reply is that reality might be doing some seriously heavy computation behind the scenes. E.g., computation with real numbers, or something else that uses the inherent properties of spacetime. Simulating the same region of spacetime and all the humans therein might be just as or more expensive than letting the process run by itself. Also, this sort of simulation might not actually be possible, which is similar to what is predicted by a few popular theories in philosophy of mind.

    So whether superintelligences do or don't influence humans, there are various reasons to treat the planetarium hypothesis as living in the Jamesian sense. It adequately explains the observed evidence no matter which set of observed evidence matches your experiences and intuitions—the planetarium hypothesis is agnostic with respect to psi and godly influence, but it has the optional benefit of explaining psi phenomena if psi phenomena are real. The sum of the given arguments only explains why various superintelligences with varying preferences might want to build a bubble around the Earth.

    Although it's not required for the hypothesis to work, I think it's best to assume that superintelligences eat the stars and the planets that almost certainly wouldn't go on to develop life. The above arguments against intervention don't apply nearly as strongly when there's no agents that will be affected. The existence of or potential for life changes the moral and economic calculus quite a bit, and the existence of agents changes things even more—the Schelling focal points are vastly different depending on how close the inhabitants of the world are to creating their own superintelligence. Accepting this difference helps explain how the superintelligence would get the vast amount of resources necessary to build Earth's planetarium.

    So in the end, is the planetarium hypothesis true? It might be falsifiable in principle, but it's not testable in practice. It doesn't contradict any of our data, including our theories of cosmology and physics. It doesn't contradict our knowledge of rational agents in any obvious way—but we don't know very much about non-human rational agents. So the question is, how does it stack up against its known alternatives, and how thoroughly have we explored the space of possible answers? I think the planetarium is just as good a resolution to the Fermi paradox as its alternatives, and that we've explored a representative sample of the answerspace, such that we shouldn't expect any answer to show up that will be immediately obviously correct. Given our state of knowledge, my two favorite solutions are the planetarium hypothesis if we want to keep the Copernican principle, and the Thomist/simulationist hypothesis if we're willing to abandon it. What's your favorite solution? Share your thoughts with a comment—if you're quick you can gain the honor of being Computational Theology's first ever commenter.

  3. In yesterday's post, "Their Majesties the Royalty of the Sciences, Part I", it was implied that theology is essentially dying. Although our King of the Sciences is still deader than he was eight centuries ago, in reality he's looking quite a bit more lively than he was a little over half a century ago, when the supposed Queen of Philosophy, philosophy of science in her logical positivist garb, attempted to strangle him to death in his sleep. Although the Queen of Philosophy's then-tenuous grasp on reality kept her from finishing the throttling and killing off our old king for good, it still looked as if he was in critical, deteriorating condition. Since then, though, theology and God have been making a comeback. William Lane Craig, one of the biggest players in theology's surprising revolution, discusses the state of modern philosophy in his clear and concise essay, "The Revolution in Anglo-American Philosophy". Even if you're not interested in theology, it's a good idea to see what the big worldviews are that make up most modern philosophical thought, and see where those worldviews originated. Craig's essay nicely summarizes the situation, and I heartily recommend it.

  4. Those foolish men of centuries long past dared give theology a title that Richard Dawkins would likely sell his soul to God for: "King of the Sciences". (Seriously, with all the lip service paid to science these days, being ordained King of the Sciences would not only make you bigger than Jesus, it'd make you bigger than the Beatles.) In these latter days theology's reputation is not exactly kingly, and mainstream philosophy of science tells us that theology and science are more or less opposites.

    On the feminine side, good ol' Gauss popularized the personification of mathematics as Queen of the Sciences; but unlike theology, mathematics' reputation has remained quite saintly.

    (As a side note, economics is sometimes called the Queen of the Social Sciences. That's more of a blind-leading-the-deaf-and-blind situation.)

    Ideally the King and Queen of the Fields of Science would be happily married, maybe even popping out some Princes and Princesses to keep the dynasty up and running. But the King is childless and has been old and impotent for quite a few centuries now, and in the meantime the Queen has found a harem of courtesans to take his place. Though we hasten to admit that the resultant bastards have their own flair and fire, we must also note that they lack the regality, the sacredness, the meaningfulness, the actually-important-ness that our old King Theology so wonderfully gave us in his long-past prime. So... what's with that? Where'd theology go, and why haven't we a Prince? Whence the drawn-out Götterdämmerung? And most importantly, is there any way to end the drought? The history of ideas is complex and we'll only look at a small part of the picture here, but hopefully that part will elucidate some of why computational theology doesn't already exist, and why it clearly should.

    We start with the Greeks, natch. Let's skip Pythagoras—did you know he was a magician who could be in two places at once? The simulators will do some crazy stuff for you if you ask nicely—and dive into tall tales about Archimedes. Now, there are two super cool things about Archimedes, and they have surprisingly little to do with baths, screws, or dubiously complicated uses of the sun as a beam weapon.

    First off, the dude more or less discovered rudimentary calculus. And that's important, because calculus was one of the first ways of talking about process-like things in the abstract. "Process" means a lot of things in a lot of contexts, and despite the link to the Principia Cybernetica (God bless it) there aren't any specific technical usages of the word that are intended here—just the general commonsense idea of a process. Computation and algorithm-ness are more or less two ways of talking about processes. Computational theology, our subject here, is sorta similar to process philosophy and process theology—though hopefully it will be more rigorous, better-motivated, and ultimately more intellectually fruitful than its cousins, because restraining ourselves to computation and hypercomputation lets us see fine-grained ideas that just aren't visible when you talk about fuzzy things like "processes" or "being" or "becoming" in a very general way. Anyhow, Archimedes' calculus was an important if rudimentary step in the direction of formalizing this idea of process, which ultimately gave us a theory of computation, and a rigorous theory of computation is a prerequisite for computational theology. What's even cooler than Archimedes' calculus, though, is his computer.

    Maybe you've heard of the Antikythera mechanism? It's mind-boggling. Here's Wik's descrip:
    The Antikythera mechanism is an ancient mechanical computer designed to calculate astronomical positions. It was recovered in 1900–1901 from the Antikythera wreck, but its significance and complexity were not understood until decades later. The construction has been dated to the early 1st century BCE. Technological artifacts of similar complexity and workmanship did not reappear until the 14th century, when mechanical astronomical clocks were built in Europe. [...] The device is remarkable for the level of miniaturization and the complexity of its parts, which is comparable to that of 19th-century clocks.
    And Cicero reports that Archimedes was known to make similar mechanisms. Archimedes wrote a manuscript, now lost, titled On Sphere-Making—he might even have pioneered the field. Polymath indeed. I suspect Hero of Alexandria gained inspiration from the Library of Alexandria's copies of Archimedes' sundry works. But enough speculation—clearly the Greeks were doing some really neat stuff that is relevant to our interests. Not only did Archimedes give us parts of a language to talk about processes in general, he also designed an advanced computer. This combination of theoretical and mechanical ingenuity can be found in all of history's greatest thinkers about what would come to be called computation and computers.

    Sadly the Romans weren't quite as clever as the Greeks, and the decline of Greek algorithmancy led to a long Islamocentric intellectual period that I won't pretend to know anything about. Here's Jayson Virissimo from LessWrong talking about some of the hip intellectual trends of the Islamic Golden Age:
    During the Islamic Golden Age, many thinkers combined Aristotelianism and Neoplatonism with knowledge from indigenous craft traditions into a form of alchemy that was refined using logic and laboratory experimentation (Jābir ibn Hayyān is probably the most famous of these thinkers). These philosophers and technologists believed that their theoretical system would allow them to perform transmutation of matter (turn one element into another) unlocking the ability to create almost any "machine" or medicine imaginable. This was thought to allow them to create al ixir (elixir) of Al Khidr fame which, in principle, could extend human life indefinitely and cure any kind of disease. Also of great interest was the attainment of takwin, which is artificial, laboratory-created "life" (even including the intelligent kind). It was hoped (by some) that these artificial creations (called a homunculus by Latin speakers and analogous to the Jewish golem) could do the work of humans the way angels do Allah's work. Not only could these AIs do our work for us, they could continue our scientific enterprise. According to William Newman, these AIs or robots "...of the pseudo-Plato and Jabir traditions could not only talk - it could reveal the secrets of nature." Sound familiar?
    A combination of Aristotelianism and Neoplatonism, eh? That potent compound also gave us Thomism. But both Thomism and the Islamic Artificial Intelligence Initiative are subjects for another day—hopefully we can persuade Jayson Virissimo to write a guest post here to fill in our knowledge of the gap between the Greeks and the next big player from the European field, Gottfried Wilhelm von Leibniz himself.

    But we'll end this post here. Next up we'll spend an entire interlude-esque post describing the genius of Leibniz. In a post after that one we'll tackle "Their Majesties, Part II" proper: we'll try to finish our summary of the history of computation, go over a briefer summary of the history of theology, step back to explain why the King and Queen's relationship didn't work out so well, and—if we have enough space and time—show how with the help of skillful syncretism we might mend their broken relationship, restore the King's dignity and the Queen's purity, and hopefully get the suckers to finally breed.


  5. We'll be making use of and talking about the Word quite often on this blog. But what is this "Word" thing?

    The Word is Logos, and that's what we'll usually call it by. Logos is reason; logic; arguments and discourse; mathematical structures; attractors in Platospace, attractors in complex systems; the laws of game theory and mechanism design; the laws of probability theory and decision theory; et cetera. Logos is also God, but we'll get to that later. Logos is higher than Dharma: Dharma is at the level of roles, archetypes, and relationships; and both Logos and Dharma are higher than the Tao, which is at the level of forces and relations. Logos is the market, Dharma is the actors, and the Tao is the trades. Logos is the water that rational agents swim in: it's the economy that rational agents deal in, and it's the language that rational agents speak in—insofar as they are rational.

    It's a conceptual metaphors thing. Everything from pebbles to frogs can be in harmony with the Tao, because the Tao is kinesthetic: it's about things not needlessly grinding up against other things: frictionless markets. Logos, the Word, requires language: only universal Turing machines can meaningfully interact with Logos. Many of us don't usually think in words, but we usually learn via symbolic information, and without words we wouldn't be able to communicate with other rational actors—so Logos can be thought of as primarily linguistic. Dharma is in-between Logos and the Tao. It can arise without language as such, but it requires community and communication. Dharma requires originalish intentionality, so I don't think individual bees have it, but I think individual chimps do. But chimps don't have Logos.

    Aristotle tells us that humans are rational animals: we are the only animals on Earth who can appreciate the Word. If there are other beings on Earth who can appreciate the Word, then they're spirits sans meat. Does silicon count as meat? We will try to answer deep philosophical questions like that one in future spirit-centric posts.

    But we're going to spend most of our Blogger-blessed kilobytes using the Word to talk about the Word, in both the Neoplatonistic-philosophical and Scholastic-theological senses. On the philosophical side, we're going to try to reason about epistemology, decision theory, computationalistic metaphysics, algorithmic probability, and all that fun stuff. Hopefully we can get really meta and talk about how the conceptual system we're using to explore conceptual systems that explore conceptual systems fares against other conceptual systems that explore conceptual systems. Shall we dub such showdowns Hofstadter Fights? I think we shall. On the theological side, we're going to try to reason about how those divers subjects are related to Saint Thomas Aquinas' and Gottfried Leibniz' Creator person. Our primary weapon on the theological front will be decision theory, which is the study of theoretical perfectly rational agents. "Perfectly rational agent" sounds a lot like God to me, natch—but we'll try to justify any apparent syncretism in later posts.
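    Since decision theory will be our primary weapon, it's worth seeing how small the core machinery of a "theoretical perfectly rational agent" really is: beliefs are a probability distribution over states, values are a utility function over action-state pairs, and the agent's policy is simply to take whichever action maximizes expected utility. A minimal sketch follows; the states, actions, and numbers are all made up for illustration and come from no particular formal system.

    ```python
    # Beliefs: a probability distribution over possible states of the world.
    beliefs = {"rain": 0.3, "sun": 0.7}

    # Values: a utility function U(action, state).
    utility = {
        ("umbrella", "rain"): 1.0,  ("umbrella", "sun"): 0.2,
        ("no_umbrella", "rain"): -1.0,  ("no_umbrella", "sun"): 1.0,
    }

    def expected_utility(action):
        # Multiply beliefs by values and sum over states.
        return sum(p * utility[(action, state)] for state, p in beliefs.items())

    # The rational agent's policy: pick the action with the highest expected utility.
    best_action = max(["umbrella", "no_umbrella"], key=expected_utility)
    print(best_action)  # -> umbrella (0.44 beats 0.40)
    ```

    That's the whole skeleton: everything interesting in decision theory lives in how you fill in (or argue about) the beliefs, the utilities, and the notion of "action".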

    So that's a brief overview of Logos—the Word—and a hint at how it fits into the bigger picture. What's the bigger picture? Rather, what's the biggest possible picture? Hop on board this blog; we hope to arrive at the answer space shortly.
