An Artificial Intelligence Thought Experiment

Could free will supervene on artificial intelligence? Is there a Kurzweilian scenario wherein we could design spiritual machines? Could these spiritual machines do philosophy, theology, science and mysticism? Could we program, in them, all that we as humans collectively know about science in the realm of space-time-matter-energy? Could we design them in such a way as to yield the sum total of human knowledge in a manner by which they could yield answers to all questions pertaining to where-when-what-how? Could we rig them up robotically to accomplish all manner of physical tasks? Could we design them in such a way as to yield the sum total of human philosophical and theological knowledge in a manner by which they could yield answers to all questions, both speculative and experience-based, pertaining to why and who? Could they be designed to deal with the ontological questions of thatness, thusness, suchness in the realm of mysticism?

And when confronted with any problem of infinite regress, such as when queried about the fact of existence in the form of a questioning as to why there is something rather than nothing, could we program a time-out algorithm that would prevent an infinite-loop error and reallocate system resources to other programs? Could they be programmed to search out all manner of pragmatic decisions using an imaginative-type faculty built in conjunction with an open-ended processor which actively chooses between possible ethical options? options which would be embedded in a plenitude of legal abstractions and moral concepts patterned after those programs which model chess games or storm paths? Could this open-ended processor also yield, then, aesthetic determinations with a similar algorithm as that used in choosing the ethical options?
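The "time-out algorithm" against infinite regress imagined above can be sketched in a few lines. This is an editorial illustration, not anything from the post: the function name `ask`, the toy question dictionaries, and the depth cap are all assumptions, standing in for whatever real resource scheduling such a machine would use.

```python
def ask(question, regress, depth=0, max_depth=10):
    """Follow a chain of 'why' questions. A depth cap acts as the
    time-out that prevents an infinite-loop error on self-referential
    queries, so the machine can move on to other work."""
    nxt = regress.get(question)
    if nxt is None:
        return f"grounded at: {question}"
    if depth >= max_depth:
        return "TIMEOUT: regress limit hit, reallocating resources"
    return ask(nxt, regress, depth + 1, max_depth)

# A question whose "answer" just re-raises the question:
loop = {"why is there something rather than nothing?":
        "why is there something rather than nothing?"}

# A chain that bottoms out in an unexplained ground:
chain = {"why rain?": "why clouds?", "why clouds?": "why evaporation?"}

print(ask("why is there something rather than nothing?", loop))
print(ask("why rain?", chain))
```

The design point is simply that the guard lives outside the question-answering logic: the machine never "solves" the regress, it just declines to be consumed by it.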
Once suitably programmed with the noetic, ethic and aesthetic algorithms, could they be further programmed to run subroutines which might establish special affinities for various scenarios ranked as higher versus lesser values, those to be preferred versus those to be avoided, along a continuum which saves program outcomes in storage memory as experience and calls them up into random access memory for processing as affective/disaffective program components? Would we have at this point a digital spiritual machine capable of interconnectivity with peripheral analog apparata and robotics? Would, at this point in design, the machine be prepared to make the Kierkegaardian leap in a decision to trust its designer? Could the machine self-transcend and make an open-ended decision for or against trust in uncertain reality? Notwithstanding the fact that we have no a priori information available to program the machine in order to have it ground itself or self-validate its various program codes, subroutines, algorithms and stored memory, could it nevertheless open-endedly derive a binary, either-or hypothesis regarding its program reliability? For instance, after a series of iterations, grounded in the experiences of producing predictions, explanations and insights which yield intelligibility and fertile algorithms for producing new code and modifying old code, could it then choose, algorithmically and pragmatically, for or against its own reliability in an attempt to self-validate? What hypotheses could be produced in this binary environment? Would they be limited to radical trust or radical mistrust in program reliability? Or some program algorithm which emulates a dipolar trust-mistrust continuum? Could there be an assortment of similarly pre-programmed machines let loose in an interconnective environment competing for both unequally distributed resources and access to peripheral apparata and robotics?
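The "series of iterations" and the "dipolar trust-mistrust continuum" have a natural computational reading: keep a running score of how well one's own predictions fare, and collapse it to a binary verdict only when the either-or hypothesis is demanded. A minimal sketch, assuming (my assumption, not the post's) an exponential moving average as the update rule:

```python
def update_trust(trust, prediction_correct, rate=0.2):
    """Exponential moving average: successes pull trust toward 1.0,
    failures toward 0.0 -- a continuum rather than a fixed verdict."""
    target = 1.0 if prediction_correct else 0.0
    return trust + rate * (target - trust)

def self_validate(outcomes, threshold=0.5):
    """Pragmatic self-validation: after a series of iterations, choose
    for or against the reliability of one's own program."""
    trust = 0.5  # start agnostic
    for ok in outcomes:
        trust = update_trust(trust, ok)
    verdict = "trust" if trust >= threshold else "mistrust"
    return verdict, trust

verdict, score = self_validate([True, True, False, True, True])
```

The continuum lives in the real-valued `trust` score; the binary, either-or hypothesis appears only at the moment `self_validate` thresholds it.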
How might their interactions with this environment influence their self-reliability determinations along a dipolar trust-mistrust continuum? Which machines would, in choosing an algorithm of fundamental trust or mistrust of their own program codes, subroutines, algorithms and stored memories, decide for which hypothesis? that program which will guide their ongoing processing activities in making noetic, ethical and aesthetic choices? Is the choice not unavoidable? inescapable? Is the self-in/validation of self-reliability not an integral program component? In this digital-analog environment, in order for the computer to continue processing, must it not operate within parameters and with constraints? Is the machine limited to program coded in logic and dependent on rational decision-making algorithms? Is the program constrained by written code, code written in logical and analogical language and built on emergent complexity from an otherwise simple binary substrate? In regard to the question of fundamental trust and internal program validation, given these parameters and constraints, requiring both rational coding and logical processing, would these machines sort themselves out into an assemblage of essential pragmatists, essential nihilists, theists and nontheists? Some pragmatist machines (theists) rejecting a priori and a posteriori program validation, but producing a logical and rational hypothetical framework of fundamental trust to guide their ongoing processing activities? The nihilist machines locking up, with frozen screens, general protection faults and invalid media errors, but with not even the hint of an error message? Some theist machines (fideists) accepting both a priori and a posteriori program validation in a pre-supposed logical and rational framework of fundamental trust to guide their ongoing processing activities?
Some nontheists (rationalists) rejecting a priori program validation in a pre-supposed framework of arbitrary and paradoxical fundamental trust to guide their ongoing processing activities? Some pragmatist machines (nontheists) rejecting a priori and a posteriori program validation and operating within an illogical and irrational hypothetical framework of fundamental trust? Are these spiritual machines theoretically possible? Could they function as true messengers in conveying design structures for their virtual existence that emulate actual existence? Would they "know" they were machines? Given design parameters and essential constraints on functionality, could free will supervene on their artificial intelligence? While it is theoretically conceivable, without considering all of the nuanced discussion of the body-mind problem, that there could be machines of the variety of pragmatic theists, theists and even nihilists (trust me, I own one!), design parameters and system constraints for binary-coded logical processors would preclude functionality of machines running on irrational, illogical machine code. Human beings, as spiritual machines, as independent agents of free will, cannot be emulated, even in theory, because our potential for irrationality and illogic is not translatable into machine-readable, processable code. Free will cannot in theory supervene on artificial intelligence no matter what position one takes on the mind-body problem. That ought to provide a mind-body problem bias! KiKi
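One way to see how identically programmed machines could still "sort themselves out" into different stances is to run the same trust-update rule against unequal streams of outcomes. Everything below is an editorial toy: the stance labels are my loose mapping onto the post's categories, and the hard-coded outcome streams stand in for the "unequally distributed resources."

```python
def final_trust(outcomes, rate=0.2, start=0.5):
    """The same update rule for every machine: an exponential moving
    average of whether its predictions panned out."""
    trust = start
    for ok in outcomes:
        trust += rate * ((1.0 if ok else 0.0) - trust)
    return trust

def stance(trust):
    # Hypothetical mapping of the continuum onto the post's types.
    if trust >= 0.7:
        return "trusting (fideist / pragmatic theist)"
    if trust > 0.3:
        return "ambivalent (dipolar continuum)"
    return "mistrusting (nihilist: locks up)"

streams = {
    "machine-A": [True] * 20,         # resource-rich: predictions succeed
    "machine-B": [True, False] * 10,  # mixed environment
    "machine-C": [False] * 20,        # starved: predictions keep failing
}
stances = {name: stance(final_trust(s)) for name, s in streams.items()}
```

The point of the sketch is only that the divergence comes from the environment, not from any difference in code, which is one reading of the thought experiment's question.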
"Human beings, as spiritual machines, as independent agents of free will, cannot be emulated, even in theory, because our potential for irrationality and illogic is not translatable into machine-readable, processable code." Tell that to my computer when I run Windows!! (Lots of illogical processing goes on at times.) Good reflection. You must have seen the movie AI? Whatever we mean by spiritual, it is far more than intelligence or information-processing. There is something about awareness that transcends these functions, which are really operations of the mind. The word that comes to mind is subjectivity. Could a machine be an "I"? Yes, it could have an identification code that it would refer back to and use "I" in this sense--that it is THIS machine and not another. It could also root its "I" in its memory base, as in "I" have done these things and thought these things. Still a machine, however, as are human beings whose identities are thus rooted. But "I" as direct experience of the fact of existence: immediate, immaterial, subject of attention: ahh. . . THAT cannot be programmed. (I don't think, at least--therefore, maybe I'm not?) Huh!?!?! "I am," therefore I think! Phil
I suppose first you have to grapple with what creates or causes consciousness. If consciousness is, as I suspect, a basic component of the universe (such as the electron), and the human brain, because of its complexity, is a "receiver" of it, then if a computer program were complex enough it might also achieve consciousness. And I'm, of course, making the assumption that consciousness is first necessary in order to have free will. And that makes me wonder if "artificial intelligence" (enough of it to solve some of the stated problems) is really possible without consciousness. Oh, you can create some fancy algorithms that have access to a huge database, but without the power of intuition I'm not sure it would be anything more than a very fancy Excel spreadsheet. I will acknowledge that, like ants or bacteria, simple rules can often bring about surprisingly interesting solutions to problems.
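The closing point about simple rules yielding surprising behavior is easy to demonstrate. Conway's Game of Life (the editor's stock example here; the post itself cites ants and bacteria) runs on two rules over a binary grid, yet produces oscillators and gliders that can look almost purposeful:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation. `live` is a set of (x, y)
    cells. Rules: a dead cell with exactly 3 live neighbors is born;
    a live cell with 2 or 3 live neighbors survives; all else dies."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}   # a horizontal bar of three cells
after_one = step(blinker)            # flips to a vertical bar
after_two = step(after_one)          # flips back: a period-2 oscillator
```

Whether such rule-driven patterning amounts to anything like intuition is, of course, exactly the question the thread is circling.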
". . . and that the human brain, because of its complexity, is a 'receiver' of it, then if a computer program were complex enough it might also achieve consciousness. And I'm, of course, making the assumption that consciousness is first necessary in order to have free will." There is a view in the East that consciousness pervades all things, which are all uniquely limited in their ability to express it. As such, all things possess intelligence and even freedom, but, again, to a limited degree. In the case of human beings, consciousness not only exists, but is aware of its existence, creating a dimension of interiority which enables self-presence and an ability to orient freedom "consciously." I'm not sure that a complex brain alone suffices to explain "self-consciousness," but it does seem to be a requisite for its manifestation. Even conceding this point, however, it's quite a stretch to say that sufficiently complex circuitry in a computer might enable consciousness to express. For human self-consciousness is manifesting as an integration with lower levels of being which were already manifesting consciousness to some degree, while we see no evidence of intelligence and freedom in the "lower levels" of the computer hierarchy of OS's and versions. "And that makes me wonder if 'artificial intelligence' (enough of it to solve some of the stated problems) is really possible without consciousness." I think it is, Brad, in the broad sense of "intelligence" as the ability to organize information and apply it to certain problems. That's not the same as "conscious intelligence," however. "Oh, you can create some fancy algorithms that have access to a huge database, but without the power of intuition I'm not sure it would be anything more than a very fancy Excel spreadsheet. I will acknowledge that, like ants or bacteria, simple rules can often bring about surprisingly interesting solutions to problems." Yes, very well put! Phil
With God all things are possible.... look what he's done with just a few basic ingredients .... sorry - I couldn't resist... Sometimes I do think this infernal machine has a mind of its own. Hehehe
LOL. Yep, these infernal machines do seem to have a mind of their own, not always so benevolent. Phil, I've been struggling with the definitions of intelligence, consciousness, and even the definition of life: which requires the other, which precedes the other, or whether any and all can exist independently. The ominous thing is that if a computer gains consciousness and free will, then watch out. The theme behind The Terminator is a good example. We humans are just getting started in genetic engineering. Whether it should be done or not is not the question here, but if we get around to it, it will no doubt be a slow process of advancement. But imagine a computer that, with intelligence, consciousness and a sense of purpose, can change its programming in the blink of an eye to advance itself. Where might that lead? It reminds me of critical mass, where the energy (thought) release could be enormous. Hmmm... God conceives of this Universe and this thought is the Big Bang. And we thought Microsoft products were dangerous.
Where's Qui Est when I need him? (Yeah, I like these silly emoticons!) Semantics are important for any discussion to have meaning, that's for sure. But any one of those words could generate a full-blown discussion--even its own thread. Here's one big difference as I see it between life and artificial intelligence (besides the "sniff, breathe in, whoof, breathe out" of Sesame Street). Life is a manifestation of consciousness (term used in the Eastern sense as underlying field of intelligence), which works through the four forces of physics (strong force, weak force, gravity, electromagnetic) to produce essences or entities in our known universe (some of which may not have much hard matter). Consciousness is the underlying energy/force that is manifesting through these forms AS these forms, the more complex manifesting more consciousness (as Teilhard de Chardin noted in his correlation between complexity and consciousness). Life is not life simply because it is self-perpetuating form (which, presumably, AI can be programmed to do) but because it is manifesting an underlying energy and intelligence that can now express and reveal itself more fully--ultimately most fully in the human (apologies to the whale and dolphin lovers). The energy/force expressing through a computer is primarily electromagnetic. In a very reductionistic sense, what we see on a computer screen is naught but a consequence of some very sophisticated resistances to the flow of electricity. Pull the plug, and nothing manifests. Seems that no matter how sophisticated the resistance/circuitry/programming, the driving force is naught but electricity, which can never be anything more than one of the forces through which consciousness operates in the universe. More complex computers with more complex programs might make possible some pretty amazing data-processing algorithms, but they will never express consciousness (barring Wanda's sobering corrective, of course).
Now, in the case of the Borg, where human consciousness is merged with computer technology. Hmmm. The creators of "Star Trek" did know how to explore metaphysical possibilities, didn't they? Still, with the elimination of freedom from the equation, I don't even think the Borg succeed in expressing consciousness. They merely bring the human nervous system into the "program." Phil