The notions of causation and agency are deeply embedded in several fascinating science-and-religion questions: How was the world created? Does God act in the world today, and if so, how? Are persons free? Why does God not prevent evil and suffering? What is life, and when does it begin and end? What is consciousness? Some areas of scientific research that inform these questions are quantum mechanics, complex and chaotic systems, artificial intelligence, and the neurosciences. In this essay I offer a brief survey of the main issues as I understand them, looking at human agency, robotics, and finally divine action.
Influencing all these questions is an ongoing debate over how to deal with scale. Given the recent success of methodological reductionism, it's tempting to wonder if working at ever-smaller scales might be a good strategy for all problems. However, such an approach does not give satisfying accounts of several important phenomena, notably human consciousness and our perceived ability to act as free agents. And for the religious believer, a reductionist materialist view of the world doesn't seem to have a 'causal joint' where a deity might impart inspiration, or indeed affect outcomes of any kind.
When we look at a human brain under the microscope, we don't see anything resembling the freely choosing mind we all experience. Instead, all we see at the micro-level are mindless electro-chemical reactions.
René Descartes solved this problem by extending the earlier Greek idea that we are made of dual substances: matter and mind/spirit. Unfortunately for the scientist, the mind substance is not visible under the microscope. Without the presence of this mind substance, matter can only ever form machines (he considered animals machine-like). More recently, some have turned this conundrum into a thesis: if what we see under the microscope is machinery, and we detect no mind substance, then we must be machines. Our perception of freedom and consciousness is therefore an illusion. This view is widely held within the AI/robotics community. As Rodney Brooks at the Massachusetts Institute of Technology Artificial Intelligence Lab puts it, "On the one hand, I believe myself and my children all to be mere machines ... But this is not how I treat them ... Like a religious scientist I maintain two sets of inconsistent beliefs and act on each of them in different circumstances." Others point to the quantum world, where machine-like behavior gives way to the probabilistic and paradoxical. The physicist John Wheeler has suggested that consciousness may be connected with quantum behavior. Kevin Sharpe joins Wheeler in wondering if consciousness could somehow affect outcomes at the quantum level. Still others solve the problem of the missing mind by embedding the material world within revised metaphysical systems. Process thinkers, for example, speculate that matter possesses a mental-like property that is undetectable in simple systems. Followers of Bohm see consciousness as an aspect of the underlying 'implicate order', and our own consciousness as sharing in that.
In the last few decades it has become apparent to some thinkers that it is possible to overuse a reductive approach. If we are willing to leave the micro world and consider more complex scenarios, new properties emerge that cannot be fully explained in terms of the components in isolation. Such an approach allows us to speculate that freedom, mind, and consciousness are in fact 'real' and open to scientific exploration. But many are resistant.
As we leave the micro world with its Newtonian simplicity and predictability, and start to deal with the world of complexes, wholes, and probabilistic descriptions, we'll need to work hard to make claims that are as free from subjectivity as possible.
It seems to me that an objective attempt to locate agency as an emergent property will benefit from the following kinds of investigation: first, assessing the degrees of spatio-mechanical freedom between components in a system (and between each component and its internal subcomponents); and second, tracing the redistribution of energy within the system over time.
If part of a system is rigidly connected to others, then we can immediately relocate our search for causes to a higher level. Consider a system of three equal-length struts loosely jointed to form a triangle. Any perturbation of the triangle will be immediately and entirely passed on to the whole. It is rigid by definition. This is not the case with a system of four struts. If we perturb one corner, this necessarily moves three struts, but not the fourth. We end up with a different trapezoid. Unlike the triangle, the trapezoidal system can experience changes/causes within itself because it has an internal degree of freedom. We can predict the state of the triangle into the future because it is stable, but we cannot do so with the trapezoid because we may find it in any of an infinite number of equally possible states. As we continue into three dimensions, we can conceive of systems with various other kinds of freedom: some which are perfectly rigid, some with 'universal joints', and some that are perfectly free to move and occupy any point in space.
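The rigidity contrast between the triangle and the four-strut system is in fact a standard result in kinematics: the Chebychev-Grübler-Kutzbach criterion counts the internal degrees of freedom of a planar linkage from its numbers of links and pin joints. A minimal sketch (my own illustration, not part of the original discussion):

```python
def planar_dof(links, joints):
    """Chebychev-Grübler-Kutzbach mobility criterion for a planar
    linkage of rigid links connected by pin (revolute) joints.
    Returns the internal degrees of freedom, taking one link as
    the fixed frame of reference."""
    return 3 * (links - 1) - 2 * joints

# Three struts pinned into a triangle: no internal freedom (rigid)
print(planar_dof(links=3, joints=3))  # 0

# Four struts pinned into a quadrilateral: one internal degree of
# freedom, so it can be found in infinitely many configurations
print(planar_dof(links=4, joints=4))  # 1
```

The formula confirms the intuition above: zero degrees of freedom makes the triangle's future state fully predictable, while even one internal degree of freedom opens up an infinity of equally possible configurations.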
Clearly this is a crude analogy, but I believe this kind of analysis hints at the following: It's possible to classify systems according to the kinds and quantities of spatio-mechanical freedom exhibited by their components. We can then locate various degrees of emergent causation at the joints that enable this freedom. Rather than saying that systems experience 'bottom-up causation', or 'top-down/whole-part influence', we could produce a graph showing the distribution of causal efficacy as it varies across the components and joints that make up the system. I like the term 'system causation' or 'network causation' to describe such a view because it acknowledges a simultaneous causal role at many scales from the unitary to the aggregate to the complex.
A common reductive worldview claims that only atoms and molecules are real, and all causes originate with biochemical reactions. But I think assessments of the kind mentioned above will allow us to claim that there are kinds of structures (and regions of scale) that deserve causal (and therefore ontological) priority in our descriptions. The conclusion will be to establish that the human self (as emergent from physical brain/body states) is a 'real' agent in the world, just as our intuition suggests. I follow Peacocke in claiming that "Real entities have effects and play irreducible causal roles in an adequate explanation of the world." Furthermore, the difference between the trapezoid and the triangle may be the seed from which a logical account of free agency can grow. This is not to suggest that atoms, for example, are not causally effective and real too, just that their status as real should be plotted on the same axis as the human self.
What would a list of systems ordered by 'causal priority' look like? At the bottom would be inanimate matter - a pebble, for example, with no internal degrees of freedom. If a person were included in the scenario, they may use the pebble for some purpose, and any effect the pebble had would be traced back to the human agent, making the pebble a 'tool'. As we add complexity to tools they fall into another category: machines. Machines have moving parts with joints, and the parts can be in various configurations. An example of something in the machine category would be a loom. Beyond tools and machines we might break out 'autonomous machines' such as steam engines, and then 'intelligent machines' such as electronic computers.
If we define the term 'machine' to mean systems where causes can be traced outside - to programmers, operators, and so on - then it seems to me that the most intelligent devices we have made thus far - digital computers - are still machines because they are deterministic: we can always trace causes to the outside. Using this kind of distinction, there is a huge gulf between even lower animals and our most powerful computers.
If we are searching for systems that clearly start causal chains rather than just acting as players in them, we need to find more than machines, we need to find free agency. What is the 'right stuff' that makes agency? I believe we have located agency when causal chains disappear (for all practical purposes) inside complex objects. Many biological systems are extremely complex, but effects can still be traced to external causes. They will only disappear into objects that possess sufficient complexity and internal degrees of freedom that our instruments and measurements cannot reliably follow. As we leave the realm of machines, we encounter what might be termed 'beings.' Of course, the example par excellence of a complex system with a great many degrees of freedom is the human brain/body.
Some might say this is a cheap trick. It can be argued that such a search for agency will only stick at minds and other high-level systems for as long as we remain ignorant as to the functioning of lower level parts of such complex systems, as is currently the case with the human brain. Perhaps with more knowledge we'll find that human agency at the conscious level is just an illusion, and that it should properly be located at lower levels, perhaps eventually sliding all the way down to nothing but physics and chemistry, but I doubt it.
Opinion is wildly divided on how to account for the self-reflective consciousness that we humans experience. On the one hand, some see the rapid advances in the neurosciences and extrapolate to a future where consciousness is also explained in neurobiological terms. The emergent-monism/non-reductive physicalism of Clayton, Murphy and Peacocke head in this direction. Others see the problem as intractable. Russell Stannard sees no easy way to reconcile our experience of self and the passage of mental-time with the block universe model of space-time. Keith Ward finds arguments for the immaterial nature of mental images to be persuasive, and given this, the logical possibility that human agency may also be rooted in the immaterial. I myself am undecided, but guardedly optimistic about future emergent-physicalist attempts to account for both mental images and subjective experience/qualia.
While the topic of consciousness invariably shows up in conversations about human agency and causation, a particular commitment on the nature of consciousness does not directly affect my discussion of agency. With or without an additional immaterial mind, I believe that a purely physicalist account of agency will still show human minds to be very significant agents in the world.
But minds do not act in isolation. When describing real-life scenarios involving people, we should probably expect to list a plethora of inter-related agents at various scales, from atomic, to genetic, to person, to group, to ecosystem, with at best fuzzy boundaries between them. But if we were to rank each of them by the quantity of effects traceable to each, I think the conscious human person would invariably be at the top of the list.
Historically, if we encountered something that moved toward us, or did much at all, it was safe to assume it was an agent. The more human-like the behavior, the more agency we were encountering. As Brian Cantwell Smith has noted, until recently if anything spoke to us, we could hope to take it home for dinner. But these days anything from our cars, to computers, to artificial intelligences might try to strike up a conversation with us. Today robots and intelligent devices are able to exhibit all kinds of behaviors that make them look as though they are agents, while we can be assured that by my definition above, they are merely machines.
An example of behavior that exposes the problem is chess playing. This has traditionally been associated with high human intelligence, and there's no doubt that IBM's chess-playing system 'Deep Blue', which in 1997 beat the world champion, Garry Kasparov, was a magnificent technical achievement. As might have been expected, the media reported that a brave new era of artificial intelligence had begun. But the researchers themselves saw things differently. According to senior manager Chung-Jen Tan, "This chess project is not AI", and according to Joseph Hoane, "The techniques that tried to mimic human judgment failed miserably. We still don't know how to do that at all."
So it would be a mistake to see Deep Blue as performing human-like judgment, even though it can outperform human judgment when applied to the same task. More importantly, Deep Blue is no more an agent in the world than an everyday PC.
I believe I can make this categorical statement about the lack of capacity for agency in Deep Blue and comparable systems, because they are entirely, or at core, digital. Since they are digital, we know that they are fully deterministic; there is no behavior they can exhibit that is not reliably traceable, after the fact, to some external source. The source could be the programmer, or a stimulus from the environment, or a combination, but is always external to the digital system.
However, robotic agency is far from a simple issue. In recent years, some members of the AI research community have been pursuing directions other than conventional symbolic/'Strong AI', focusing instead on embodied intelligence, or 'situated AI'. Here, the objective is to create robots whose intelligence is a result of their physicality and environment (while also having a computational component).
Rodney Brooks heads the AI Lab at MIT, a pioneering center in embodied robot research. According to Brooks, by 2020 or so we will share the planet with robots that have emotions, desires, love and pride. One of the lab's early successes was 'Genghis', an insect-like creature with six legs and compound eyes. Genghis' eyes and legs are the inputs and outputs for simple behaviors such as 'chase', 'stand-up', and 'walk over obstacles'. But when these are combined together in one body, cued by stimuli from the environment, the result is a robot that behaves like many insect predators we encounter in nature. Brooks describes it as having a wasp-like personality. Importantly, this was achieved with no central cognition. When independent observers witness Genghis, they can't help but describe its actions in terms of novel emergent behaviors for which Genghis has no programming or physical correlates. A reductive approach would deny this claim; if there are no correlates then the behaviors must be illusory. Even if we do not approach Genghis through the lens of reductionism, there is a chance that our perception of its 'personality' is something we are projecting onto what we see based on our prior experience.
Let's turn to another example. In this case the robot body has four wheels and a light sensor on the front that supplies power to the rear wheels in proportion to the light level it receives. (There is no digital computation on this particular robot.)
If you can run Java applications in your browser, here is a link to a software simulation of this robot: http://www.counterbalance.net/robot/robot.html (Note: on some computers this can take several seconds to initialize.)
With [Mode 1] selected, click [Start]. The robot - depicted as a Bumble Bee - will move forwards stimulated by the light.
It will go forward while there is light in the room falling on its sensor and stop when it has driven far from the light. This is machine-like behavior, and not at all intelligent.
However, if we add a second light-sensor, so there are now two where the Bee's antennae would be, and if we wire the left sensor to the right wheel, and the right sensor to the left wheel, what will happen? Just like a moth to flame, the robot will drive to the light from wherever it is in the room. If it incorrectly veers to the left, the right light sensor, seeing more light due to the angle, will automatically send more power to the left wheel and put it back on course. This self-correction occurs until it finds its goal. If the light is then moved, it will follow. The brighter the light, the more vigorously it will seek it.
To see this behavior in the simulation click [Mode 2] and then [Start].
By choosing [Mode 3] the robot finds itself in a field of many lights, and will automatically traverse the set, finding an efficient path amongst them all until the last one has been found.
How should we describe the behavior of such a robot? I think we must call it light-seeking. Importantly, the robot responds in time, with what seems to be a beginning point, a period of trial and error, and a goal temporarily achieved, before potentially restarting its search if the light is moved. But there are no programming or physical correlates whatsoever for this temporally extended behavior. A hardcore reductive approach would insist on describing this robot in terms of a dual set of light-sensors, motors and drive wheels. The light-seeking behavior would not be apparent if we reduced it to its parts and considered them in isolation. Since light-seeking functions are not apparent in the parts, any claim that the robot functions this way would be considered mistaken. I think that conclusion is obviously false. On the other hand, if the emergent behavior is real, would it be correct to say this robot has a goal - a telos - and an inbuilt disposition to achieve that goal, i.e. an 'intention'? It certainly behaves as if it does...
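The two-sensor robot described above is essentially what roboticists call a Braitenberg vehicle, and its light-seeking behavior can be reproduced in a few lines of simulation. The sketch below is my own construction (the intensity model, sensor placement, and wheel spacing are assumptions, not details of the applet): the left sensor drives the right wheel and vice versa, and nothing in the code mentions 'seeking' - yet the robot converges on the light.

```python
import math

def light_intensity(sx, sy, lx, ly):
    # Simple model: intensity falls off with squared distance to the lamp
    return 1.0 / (1.0 + (sx - lx) ** 2 + (sy - ly) ** 2)

def step(x, y, heading, lx, ly, dt=0.1, gain=5.0, wheelbase=1.0):
    # Sensors sit where the antennae would be, angled left and right
    left_sensor = (x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right_sensor = (x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # Cross-wiring: left sensor powers the right wheel, and vice versa
    right_wheel = gain * light_intensity(*left_sensor, lx, ly)
    left_wheel = gain * light_intensity(*right_sensor, lx, ly)
    # Differential drive: the wheel-speed difference steers the body
    # toward whichever side sees more light
    heading += (right_wheel - left_wheel) / wheelbase * dt
    speed = (left_wheel + right_wheel) / 2.0
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# Start at the origin facing along the x-axis, lamp at (10, 5)
x, y, heading = 0.0, 0.0, 0.0
for _ in range(2000):
    x, y, heading = step(x, y, heading, 10.0, 5.0)
# The robot ends up far closer to the lamp than where it began
```

Each rule in isolation is a mindless sensor-to-motor wire; the goal-directed 'moth to flame' behavior appears only in the coupled whole.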
While Brooks is more than sympathetic to the reality of emergent behaviors, he believes that designing human-like robots will turn out to be relatively easy because "we are machines," "... nothing more than a highly ordered collection of biomolecules." I believe he comes to this conclusion by extrapolating the key insight that led to the success with Genghis and which continues with the Cog and Kismet projects. The insight was: leave out cognition. Prior to Brooks' work, the vast majority of AI researchers were trying to develop computer programs that mimicked human-like cognitive processes, and robots controlled by such programs, which maintained a high-fidelity software model of the robot's state and the world around it. This turned out to be significantly harder than expected. Meanwhile, Brooks decided to see how far he could get by building robots equipped with just basic responses to their environment, explicitly leaving out any central cognition component. The answer was: surprisingly far.
Having had significant success to date by sidestepping the cognition problem, Rodney Brooks is ready to say that all cognition is unnecessary, regardless of the behaviors we hope to build into robots. He considers behaviors that we associate with cognition to be simply epiphenomena and is explicitly not working on the problem. In fact, a key 'problem' in his work has become convincing humans that they are machines... As he sees it, "all of us overanthropomorphise humans, who are after all mere machines" and "all arguments that robots won't have emotions boil down to arguments that we are not machines."
With this view of human nature in place, he is able to claim that Artificial Intelligence research has produced a robot comparable to HAL 9000 from 2001: A Space Odyssey. He considers this milestone to have been passed on May 9, 2000, when Cynthia Breazeal defended her thesis on Kismet, a robot designed for social interaction. According to Brooks, "Kismet is alive. Or may as well be. People treat it that way." Of course, he is fully aware of the limitations of current robotics. For example, Kismet can vocalize but cannot say words, and only hears prosody, but he does not see this as "an impediment to a good conversation." But in the final analysis he admits "... we do not stay fooled for long. We treat our robots more like dust mites than like dogs or people. For us the emotions are not real."
I'm deeply impressed by the work of Brooks and the AI Lab, but I would agree with his own assessment that the work to date has limitations. Because the robots are embodied and don't rely on digital technology, they are certainly agents where computer programs are not. But the degrees of complexity and internal freedom that we see in the next few generations of robotics will surely remain a far cry from biological systems. Personally, I suspect robotics will need to borrow techniques from biology, and even achieve similar levels of complexity, before we will meet Commander Data from Star Trek. Unfortunately, as Brooks himself notes, biomolecular behavior is "incredibly expensive to compute."
When Oxford theologian Keith Ward was asked in an interview if he would baptise a robot, he gave what I consider a very profound and helpful answer. His reply: "If it asked properly." Let's unpack this: First, for 'it' to ask at all, we would have to be convinced that it was an agent. (If we could trace the question to programming provided from the outside, it would no longer be a valid request.) For it to genuinely 'ask' would require it to possess rich notions of intentionality and relationality. And finally, for it to ask 'properly' would entail us first deciding how we would tell if a human were to ask improperly, and then trying to apply those criteria to the robot too. Presumably, in order for a robot to formulate a convincing 'proper' explanation of why it wished to be baptised, it would be able to express its understanding of a transcendent reality. A robot capable of doing this would certainly have my attention!
Our experiences in the natural world seem to fall neatly into two categories: those that seem law-like in regularity, and those that seem due to chance. There doesn't seem to be a pressing need for Divine action as a distinct third category. This is especially true if we limit our observations to classical levels of complexity and scale - here we detect a robust determinism with strict observance of conservation laws. On the other hand, our observations at the quantum scale lead us to the reasoned conclusion that the future is in some limited sense open, or "not decided" (to quote Bohr). Here even conservation laws can be bent, at least temporarily. Below I shall survey a few ways in which Divine action can potentially be included in a scientific framework that deals primarily in terms of necessity and chance.
Many commentators see downward causation as a way to account for the manner in which God causes events in the world. This begins by recognizing the limitations that stem from seeing causation occurring only at the micro-scale within a strictly reductionist perspective of nature. While I agree a strict reductionism is hard to support for all kinds of reasons, I follow Barbour in expressing reservations with downward causation as an explanation for Divine agency. The trouble, as I see it, is that examples of downward causation in nature have an identifiable 'top', and the energy distribution that occurs during the event being caused can be traced throughout the system. For example, in the case of a piston heating a gas volume, the top is the piston (or the operator pushing it, depending on how the scenario is set up) and the effect is the increased velocity of the gas molecules. In the case of the Universe, I'm not sure what to call the 'top' from where the causal chain would begin. I also don't see a physical connection from a 'top' to all the places God might act, along which we might observe energy redistribution.
In the case of minds affecting bodies - say by lifting an arm - there is a readily observable interplay between brain states, nerve fibres, muscle states, and molecular states, all of which interact as the arm moves. If this is to serve as a model for Divine action we must be able to reasonably describe the Universe as God's body. I'm really not sure how to do that. A body is a body due to the complex causal relationships among its parts. It seems to me that the universe-at-large lacks anything like such relationships. It's more like a gas than a body.
Even if we were to proceed with the analogy, I'm not sure I'm happy with the implications. The control that I exhibit on most parts of my body is so imprecise as to be negligible, and at some scales is zero. I can affect blood pressure, but cannot affect individual blood cells. Sharpe makes use of the 'blunt' nature of downward causation to account for the minimal ways in which we can see the Divine acting at the level of human experience; as in 'trickle-down economics' the lower levels may not see the effects originating at higher levels. (This prompts the question: at what levels should we expect to see the Divine acting maximally?) Barbour suggests that God would not need the analogue of a nervous system because of omnipresent connections to all that is. And Peacocke also clearly states, "Of course, this network of events is not identical with God and is not God's body, for it is not in any sense a 'part' of God as such." But it seems to me that if we don't have some kind of causal network to observe, then we don't have downward causation. What we have instead is pure immanence. While many thinkers, including Newton, have wondered if the Mind-Body interaction was a useful analogy for the God-World interaction, it seems to me that the problems are increasing in number.
Barbour, Peacocke and Sharpe make use of the idea that God interacts with the world via the 'communication of information.' Such a notion is seen as fertile ground because conservation laws need not be violated - a perennial problem for accounts of Divine action. It's common to see the triplet of "matter, energy and information" listed as the basic units of reality, and we often think of information as somehow disconnected from the other two and not subject to the same laws. However, it seems to me that information is always and only realised in physical states. When found in such a triplet, I believe information is a synonym for the pattern, organization, or structure of matter/energy. Certainly it deserves to be elevated up with the other two, but the same laws bind all three. As I understand it, in order for God to "input information", matter/energy must be reorganized - by definition. It has been suggested that since God is omnipresent, no energy is required for such communication, but I don't see how this helps. Sharpe sees nonlocality as a means to impart the information without disrupting conservation, but the universe still needs to be found in a state that is different from what we had expected, if we are to then claim that God was objectively effective temporally.
I should say that both Peacocke and Barbour are careful to state that information is only ever realised in physical states during coding, transmission, and decoding, and that it should not be seen "in purely static terms, as if the message were the pattern itself." Peacocke adds: "No information flows without some exchange of energy and/or matter." I agree. If this is acknowledged, I don't think it is entirely fair to present the 'communication of information' route for Divine action as uniquely immune to interventionism critiques.
We sometimes use the word 'information' to mean quite different things. Sometimes we mean organization or structure - this is always physically instantiated. Sometimes we mean the input to an information-processing system. This input can vary from an unambiguously rich and clear signal, to pure noise. Once again, this input is physically instantiated. Finally, we sometimes mean an abstract concept, as in 'the BRCA1 gene'. In this case we recognise that information as structure can be generalized and given a symbolic representation. Information in this last sense is 'multiply realisable', as is the case for computer languages and, to some extent, human languages. When discussing information, it's very helpful to specify in what sense we are using the term.
There are aspects of our world that we believe to be - for all practical purposes - unpredictable, namely, quantum, chaotic, and very complex systems. As such, the possibility of Divine action in these systems is hard to rule out, but just as hard to account for in convincing ways. Quantum indeterminacy has been offered as a way for God to communicate information to the Universe. This would purportedly allow God to act from the "bottom up." Peacocke has criticized the idea that God changes quantum events because of the need to manipulate an "absurdly large" number of events to ensure the behavior remains deterministic at macro scales. I don't find this a forceful criticism. How could we know what's too large or 'conveniently small' for God?
While scientific accounts of Divine action are today problematic, it seems we can also say that for all practical purposes the future state of the Universe appears to be genuinely open, and as yet undecided. While there are some scenarios where any change would seem to require a violation of conservation laws, with complex systems that have many degrees of nested spatio-mechanical freedom (e.g. animal bodies and brains) or are tied to quantum indeterminacy, there are different future states that are equally likely. Importantly, they all balance the books of energy conservation, so there is scope for God to bring about one particular outcome in preference to another without breaking the laws of physics. Often the choice of one outcome over another would be difficult or impossible to detect, but we can envision certain scenarios that would be quite provocative. For example, if a Geiger counter were placed near a radio-isotope, it would emit clicks as it detects decaying particles from the radio source. The time between each click is entirely unpredictable - a direct consequence of quantum indeterminacy - so any series of clicks is possible. We can easily program a computer to translate the time delay between clicks into letters of the alphabet. Such a system would now type letters.
We would be right to expect gibberish. But an equally possible outcome is that the system starts typing English sentences. If it were to type "This is a demonstration of Divine action", then that would certainly be a big help to continuing discussion of this difficult topic.
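The Geiger-counter typewriter is easy to sketch in code. The toy simulation below is my own construction: it uses Python's pseudo-random exponential variates as a stand-in for genuinely indeterminate decay times (real inter-click delays from a radio-isotope are exponentially distributed), and quantizes each delay into one of 26 bins to pick a letter.

```python
import random
import string

def clicks_to_letters(delays, bin_width=0.1):
    """Translate inter-click delays (in arbitrary time units) into
    letters by quantizing each delay and wrapping into the alphabet."""
    return "".join(string.ascii_uppercase[int(d / bin_width) % 26]
                   for d in delays)

# Stand-in for the Geiger counter: exponentially distributed waiting
# times, as produced by radioactive decay (pseudo-random here, so this
# only simulates - it does not possess - quantum indeterminacy)
random.seed(0)
delays = [random.expovariate(1.0) for _ in range(40)]
print(clicks_to_letters(delays))  # 40 letters of (almost certainly) gibberish
```

Since any sequence of clicks is physically possible, any string of letters is possible too; nothing in physics forbids the machine typing an English sentence, only probability.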
 Who could disagree with Arthur Peacocke when he writes that we "...cannot avoid arriving at a view of matter that sees it manifesting mental, personal and spiritual activities." See Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 147
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 174
 Kevin Sharpe, Sleuthing the Divine: The Nexus of Science and Spirit (Minneapolis: Fortress, 2000): 97
 See The Princeton Engineering Anomalies Research (PEAR) lab at http://www.princeton.edu/~pear/
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 148
 Kevin Sharpe, Sleuthing the Divine: The Nexus of Science and Spirit (Minneapolis: Fortress, 2000): 83
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 49
 After all, an awful lot that goes on in the world is deeply influenced by the making and breaking of chemical bonds.
 With no agents, it seems to me we are left with a full-blown deterministic, atemporal, block-universe description, which I really don't know how to deal with. But that's not to say this view of reality does not have advocates. Russell Stannard and Chris Isham are two.
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 5
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 46
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 50
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 172
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 173
 Curiously, he considers cognition to exist in the mind of the observer, rather than describing it as an emergent property of the robot itself.
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 39
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 175
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 176
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 64
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 95
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 158
 Rodney Brooks, Flesh and Machines (Pantheon Books, 2002): 190
 Kevin Sharpe, Sleuthing the Divine: The Nexus of Science and Spirit (Minneapolis: Fortress, 2000): 52
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 172
 I don't quite see how God can serve this function in a way that's at all analogous to physical systems.
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 173
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 108
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 110
 Kevin Sharpe, Sleuthing the Divine: The Nexus of Science and Spirit (Minneapolis: Fortress, 2000): 68
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 174
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 58
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 120
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 166
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 122
 Kevin Sharpe, Sleuthing the Divine: The Nexus of Science and Spirit (Minneapolis: Fortress, 2000): 54,55
 Ian Barbour, When Science Meets Religion (New York: HarperCollins, 2000): 106
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 53
 Arthur Peacocke, Paths from Science towards God: The End of all our Exploring (Oxford: OneWorld, 2001): 106