PhiloComp.net

Computational & Philosophical Issues

Huge areas of study come under the general heading of Meta-Studies of Computing and Information Science: familiar topics here include Minds and Machines, Functionalism, Computability, Quantum Computing, and Computer Ethics. But there are many other less familiar issues that are also of interest to both computer scientists and philosophers, with considerable scope for mutual benefit from joint investigation. These range from some that are concrete and practical to others that are sufficiently abstract to rank as general "Philosophy of Computation".

Issues from Artificial Intelligence

John McCarthy was one of the founding fathers of "Artificial Intelligence" and the man who coined that name. Two of his papers are of particular interest here, Artificial Intelligence and Philosophy and Artificial Intelligence, Logic and Formalizing Common Sense. In these, he focuses on a number of conceptual issues that have traditionally occupied philosophers, but which would also need to be addressed by designers of sophisticated Artificial Intelligence systems. Some examples of these, together with others, are discussed in the page on Philosophical Issues in Robot Design, which mentions Free Will and Self-Consciousness; Language, Classification, and Speech Acts; Representation of Cognitive States; Referring Expressions; Causation and Conditionals; and Robot Ethics. Philosophers as well as AI engineers can benefit greatly from this sort of cross-disciplinary linkage, because seeing how various conceptual treatments would fare in practical application can highlight their strengths and inadequacies very effectively, and reveal appropriate desiderata and constraints.

The Frame Problem

The Frame Problem is perhaps the clearest example of a theoretical difficulty which, though always present, became noticed only when AI researchers attempted to implement practical inferencing systems. The problem is that of identifying, when making inferences, which items of information are relevant to the inference and which can safely be ignored. If no information is ignored, then the inferencing process is likely to be directionless and disastrously inefficient. If, on the other hand, an attempt is made to restrict the information taken into account in advance of drawing inferences, then there is an obvious risk that the consequences drawn will be inadequate because uninformed. The Frame Problem is now sufficiently well known to merit its own heading in the Stanford Encyclopedia of Philosophy, which can be consulted for further discussion.
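The inefficiency horn of the dilemma can be made vivid with a toy sketch. The rule base, fact names, and chaining strategy below are all invented for illustration, not drawn from any real system: a naive forward-chainer that ignores nothing must re-examine every pair of facts on every pass, so its cost grows with the size of the whole knowledge base rather than with the handful of facts that actually matter.

```python
from itertools import permutations

# Toy rule base: each rule maps an ordered pair of premises to a conclusion.
# All names here are illustrative only.
rules = {
    ("robot_in_kitchen", "holds_cup"): "can_pour_tea",
    ("can_pour_tea", "kettle_boiled"): "tea_served",
}

def forward_chain(facts):
    """Naive forward chaining: ignore nothing, re-check every ordered
    pair of known facts on every pass until no new conclusion appears."""
    derived = set(facts)
    checks = 0
    changed = True
    while changed:
        changed = False
        for pair in permutations(sorted(derived), 2):
            checks += 1  # cost grows with ALL facts held, relevant or not
            conclusion = rules.get(pair)
            if conclusion and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived, checks

relevant = {"robot_in_kitchen", "holds_cup", "kettle_boiled"}
result, checks_small = forward_chain(relevant)

# Pad the knowledge base with facts irrelevant to the inference:
# the same conclusion emerges, but the work done balloons.
noisy = relevant | {f"irrelevant_{i}" for i in range(200)}
result_noisy, checks_big = forward_chain(noisy)

print("tea_served" in result_noisy)  # True — but at vastly greater cost
```

Restricting attention in advance would cut the cost, but any such filter risks excluding a fact that turns out to be relevant — which is precisely the dilemma described above.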

Verification, Induction, and Reliability

Most practically used computer programs are too complex to be logically verified at any reasonable expense, and hence the question arises how far they should be trusted (quite independently of the relatively well-understood physical issue regarding the reliability of the hardware on which they run). How much verification, and by what means, is required to yield justified confidence in a software product? Can induction provide a sufficient basis for assurance, and on what factors does such inductive support depend?
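One common way of accumulating such inductive support is random testing: each passing trial adds evidence for trusting the program, while no finite run amounts to a logical proof of correctness. The sketch below is a minimal illustration of this idea, using an invented stand-in routine and checking it against properties a correct sort must satisfy.

```python
import random
from collections import Counter

def library_sort(xs):
    """Stand-in for a complex routine whose internals we cannot verify.
    (Here it simply delegates to Python's built-in sort.)"""
    return sorted(xs)

def inductive_check(fn, trials=5000, seed=0):
    """Random testing: every passing trial adds inductive support for
    trusting fn, but no number of trials constitutes a verification."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = fn(xs)
        # Properties a correct sort must satisfy:
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert Counter(out) == Counter(xs)                # same elements, same counts
    return True

print(inductive_check(library_sort))  # True — evidence, not proof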

Complexity and Feasibility

The questions of decidability and computability introduced in the page on Hilbert, Gödel, and Turing are very familiar to philosophers of mathematics. Far less well explored are the related issues of complexity, and how these should impinge on our understanding of "in principle" constraints. Many philosophical discussions appeal to the idea that certain results can be achieved "in principle", implying a relatively simple distinction between those that are logically achievable and those that are not. Computer science provides a far more sophisticated structure of complexity classes, drawing distinctions such as that between results that are achievable in "polynomial" time (and/or space) and those that require "exponential" resources. It is fairly easy, for example, to write an algorithm to play chess infallibly, but the program is not "feasible" because it would require vastly more time to run than the age of the universe. In many philosophical contexts, it is arguable that some such complexity-related notion of "feasibility" is more appropriate than the traditional notion of "in principle" possibility, but these issues remain largely unexplored.
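The chess example can be backed by some back-of-envelope arithmetic. The figures below are rough, commonly cited estimates (an average of around 35 legal moves per position, games of around 80 half-moves), and the "generous machine" is an assumption for the sake of illustration:

```python
# "In principle" vs "feasible": exhaustive chess search is well defined,
# but its cost dwarfs any resources the physical universe could supply.
BRANCHING = 35        # rough average number of legal moves per position
PLIES = 80            # rough length of a game in half-moves
positions = BRANCHING ** PLIES   # size of the naive game tree, ~10**123

AGE_OF_UNIVERSE_S = 4.35e17      # ~13.8 billion years, in seconds
OPS_PER_SECOND = 1e18            # a generous exascale machine (assumed)

feasible_ops = OPS_PER_SECOND * AGE_OF_UNIVERSE_S   # ~4 * 10**35 operations
print(positions > feasible_ops)  # True
```

The algorithm is logically unimpeachable, yet the gap between roughly 10^123 required steps and roughly 10^35 available steps is what the complexity-theoretic notion of "feasibility" is designed to capture.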

Classification of Algorithms

As well as general questions about the ontology of algorithms (belonging under the meta-heading of Philosophy of Computing), there are also more practical questions about how algorithms should be classified. How many fundamental types of data and algorithmic structure should we recognise? What sorts of taxonomy are appropriate, and do these carry implications for the desirable variety of programming languages?

Simulation and Reality

When is simulation tantamount to creation? If a program reliably simulates the playing of a game of chess, for example, does that imply that it is genuinely playing chess? If so, what should we say about a program that reliably simulates intelligent judgement in other areas? Should we deem such a program genuinely intelligent?