Awareness is the ability to perceive, to feel, or to be conscious of events, objects, thoughts, emotions, or sensory patterns.[1] In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding. More broadly, it is the state or quality of being aware of something. In biological psychology, awareness is defined as a human's or an animal's perception and cognitive reaction to a condition or event.
Awareness is a relative concept. An animal may be partially aware, may be subconsciously aware, or may be acutely aware of an event. Awareness may be focused on an internal state, such as a visceral feeling, or on external events by way of sensory perception. Awareness provides the raw material from which animals develop qualia, or subjective ideas about their experience. Insects are aware when you try to swat or chase them, but they do not have consciousness in the usual sense, because they lack the brain capacity for thought and understanding.
Popular ideas about consciousness suggest the phenomenon describes a condition of being aware of one's awareness, or self-awareness.[2] Efforts to describe consciousness in neurological terms have focused on describing networks in the brain that develop awareness of the qualia developed by other networks.[3]
Neural systems that regulate attention serve to attenuate awareness among complex animals whose central and peripheral nervous system provides more information than cognitive areas of the brain can assimilate. Within an attenuated system of awareness, a mind might be aware of much more than is being contemplated in a focused extended consciousness.
Basic awareness of one's internal and external world depends on the brain stem. Bjorn Merker,[4] an independent neuroscientist in Stockholm, Sweden, argues that the brain stem supports an elementary form of conscious thought in infants with hydranencephaly. "Higher" forms of awareness including self-awareness require cortical contributions, but "primary consciousness" or "basic awareness" as an ability to integrate sensations from the environment with one's immediate goals and feelings in order to guide behavior, springs from the brain stem which human beings share with most of the vertebrates. Psychologist Carroll Izard emphasizes that this form of primary consciousness consists of the capacity to generate emotions and an awareness of one's surroundings, but not an ability to talk about what one has experienced. In the same way, people can become conscious of a feeling that they can't label or describe, a phenomenon that's especially common in pre-verbal infants.
Within the brain stem lie interconnected regions that regulate the direction of eye gaze and organize decisions about what to do next, such as reaching for a piece of food or pursuing a potential mate.[citation needed]
The ability to consciously detect an image presented as a near-threshold stimulus varies across presentations. One factor is "baseline shifts" due to top-down attention, which modulates ongoing brain activity in sensory cortex areas and thereby affects the neural processing of subsequent perceptual judgments.[5] Such top-down biasing can occur through two distinct processes: an attention-driven baseline shift in alpha waves, and a decision bias reflected in gamma waves.[6]
Living systems are cognitive systems, and living as a process is a process of cognition. This statement is valid for all organisms, with or without a nervous system.[7]
This theory contributes the perspective that cognition is a process present at organic levels that we do not usually consider aware. Given the possible relationships among awareness, cognition, and consciousness, this theory offers an interesting perspective in the philosophical and scientific dialogue about awareness and living systems theory.
In cooperative settings, awareness is a term used to denote "knowledge created through the interaction of an agent and its environment — in simple terms 'knowing what is going on'".[8] In this setting, awareness conveys how individuals monitor and perceive information about their colleagues and the environment they are in. This information is critical to the performance and success of collaborations.[9][10] Awareness can be further defined by breaking it down into a set of characteristics:[11]

- It is knowledge about the state of some environment.
- Environments change continually, so awareness knowledge must be constantly maintained.
- Individuals interact with the environment, and awareness is maintained through this interaction.
- It is generally part of some other activity, making it a secondary goal to that activity's primary goal.

Different categories of awareness have been suggested based on the type of information being obtained or maintained:[12]

- Informal awareness is the sense of who is around and what they are up to, e.g., the information you might know from being collocated with an individual.
- Social awareness is the information you maintain about a social or conversational context. This is a subtle awareness maintained through non-verbal cues, such as eye contact and facial expression.
- Group-structural awareness is knowledge of others' roles, responsibilities, and status in a group. It is an understanding of group dynamics and of the relationship another individual has to the group.
- Workspace awareness is a focus on the workspace's influence on and mediation of awareness information, particularly the location, activity, and changes of elements within the workspace.

These categories are not mutually exclusive; the same awareness information can often be considered more than one type. Rather, the categories serve to clarify what knowledge a particular type of awareness conveys and how that knowledge might be conveyed. Workspace awareness is of particular interest to the CSCW community, due to the transition of workspaces from physical to virtual environments.
While the types of awareness above refer to knowledge a person might need in a particular situation, context awareness and location awareness refer to information a computer system might need in a particular situation. These concepts are of particular importance for AAA (authentication, authorization, accounting) applications.
The term location awareness continues to gain momentum with the growth of ubiquitous computing. First defined for networked workstations (network location awareness), it has been extended to mobile phones and other mobile communicating devices. The term covers a common interest in the whereabouts of remote entities, especially individuals, and their cohesion in operation. Context awareness is a superset that includes location awareness: it extends awareness to context features of an operational target as well as to the context of an operational area.
Covert awareness is knowledge of something without being conscious of knowing it. For example, some patients with specific brain damage are unable to say whether a pencil is horizontal or vertical,[citation needed] yet they are able to grab the pencil, using the correct orientation of the hand and wrist. This condition implies that some of the knowledge the mind possesses is delivered through channels other than conscious intent.[original research?]
Awareness forms a basic concept of the theory and practice of Gestalt therapy.
In general, "awareness" may also refer to public or common knowledge or understanding about a social, scientific, or political issue, and hence many movements try to foster "awareness" of a given subject, that is, "raising awareness". Examples include AIDS awareness and Multicultural awareness.
^ Wyart, V.; Tallon-Baudry, C. (July 2009). "How Ongoing Fluctuations in Human Visual Cortex Predict Perceptual Awareness: Baseline Shift versus Decision Bias". Journal of Neuroscience 29 (27): 8715–8725. doi:10.1523/JNEUROSCI.0962-09.2009. PMID 19587278.
^ Capra, Fritjof (1996). The Web of Life: A New Scientific Understanding of Living Systems. Garden City, N.Y.: Anchor Books. ISBN 0-385-47676-0.
^ Gutwin, Carl; Greenberg, Saul (September 2002). "A Descriptive Framework of Workspace Awareness for Real-Time Groupware". Computer Supported Cooperative Work (CSCW) 11 (3–4): 411–446. doi:10.1023/A:1021271517844.
^ Dourish, Paul; Bellotti, Victoria (1992). "Awareness and Coordination in Shared Workspaces". Computer Supported Cooperative Work (November): 107–114. doi:10.1145/143457.143468.
^ Schmidt, Kjeld (2002). "The problem with 'awareness': Introductory remarks on 'awareness in CSCW'". Computer Supported Cooperative Work 11 (3–4): 285–298. doi:10.1023/A:1021272909573.
^ Gutwin, Carl; Greenberg, Saul (1999). A Framework of Awareness for Small Groups in Shared-Workspace Groupware (Technical Report 99-1). University of Saskatchewan, Canada: Department of Computer Science.
^ Greenberg, Saul; Gutwin, Carl; Cockburn, Andy (1996). "Awareness Through Fisheye Views in Relaxed-WYSIWIS Groupware". Proceedings of the Conference on Graphics Interface '96: 28–38.
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.
Computer science is the scientific and practical approach to computation and its applications. It is the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures (or algorithms) that underlie the acquisition, representation, processing, storage, communication of, and access to information. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems.[1]
Charles Babbage is credited with inventing the first mechanical computer.

Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. The ancient Sanskrit treatise Shulba Sutras, or "Rules of the Chord", is a book of algorithms written in 800 BC for constructing geometric objects like altars using a peg and chord, an early precursor of the modern field of computational geometry.
Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[2] In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[3] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[4] He started developing this machine in 1834 and "in less than two years he had sketched out many of the salient features of the modern computer".[5] "A crucial step was the adoption of a punched card system derived from the Jacquard loom",[5] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[6] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[7] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[8]
During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[9] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[10][11] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[12] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[13][14] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[15] and later the IBM 709[16] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating […] if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[13] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[14]
Time has seen significant improvements in the usability and effectiveness of computing technology.[17] Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human aid was needed for efficient use—in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.
The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[18]
Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society—in fact, along with electronics, it is a founding science of the current epoch of human history called the Information Age and a driver of the Information Revolution, seen as the third major leap in human technological progress after the Industrial Revolution (1750–1850 CE) and the Agricultural Revolution (8000–5000 BC).
Computers enable the simulation of various processes, including computational fluid dynamics; physical, electrical, and electronic systems and circuits; and societies and social situations (notably war games), along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for the physical realization of new (or modified) designs; the latter includes essential design software for integrated circuits.[citation needed]
Artificial intelligence is becoming increasingly important as it gets more efficient and complex. There are many applications of AI, some of which can be seen at home, such as robotic vacuum cleaners. It is also present in video games and on the modern battlefield in drones, anti-missile systems, and squad support robots.
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[26] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[27] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[28]
Although first proposed in 1956,[14] the term "computer science" appears in a 1959 article in Communications of the ACM,[29] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[30] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[29] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such programs, starting with Purdue in 1962.[31] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[32] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[33] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.
Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[34] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[35] The term computics has also been suggested.[36] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[37]
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[10] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[14]
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[38] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[39]
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and a numerical orientation tend to consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[40][41] CSAB, formerly called the Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE CS)[42]—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[40]
The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.
According to Peter Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[10] Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
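As a toy illustration of the resource question that complexity theory asks, the sketch below (in Python, with invented helper names; not from the source) counts the basic comparison steps taken by linear search versus binary search on the same sorted input:

```python
# Count the basic steps of two search strategies on a sorted list.
# Linear search's worst case grows linearly with input size; binary
# search's grows logarithmically, since each step halves the interval.

def linear_search_steps(items, target):
    """Scan left to right, counting comparisons until target is found."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(items, target):
    """Repeatedly halve the search interval, counting iterations."""
    steps = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1024))                 # sorted input of size 1024
print(linear_search_steps(data, 1023))   # 1024: the whole list is scanned
print(binary_search_steps(data, 1023))   # 11: roughly log2 of 1024 halvings
```

Both functions solve the same computational problem; complexity theory formalizes why the second uses exponentially fewer steps on sorted data.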
Information theory is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[44] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
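Shannon's quantification of information can be made concrete in a few lines of Python; this sketch (the function name is ours, not a standard API) computes the entropy of a discrete distribution in bits, the fundamental limit on how compactly a source can be encoded:

```python
import math

# Shannon entropy in bits: H = -sum(p * log2(p)). A fair coin carries
# one bit per toss; a biased coin carries less and is more compressible.

def entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit per toss
print(entropy_bits([0.9, 0.1]))   # biased coin: about 0.47 bits
print(entropy_bits([0.25] * 4))   # uniform over 4 symbols: 2.0 bits
```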
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
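A miniature of the formal-methods idea can be sketched in Python under invented names: state the specification as a predicate and exhaustively check an implementation against it over a bounded domain, a hand-rolled analogue of what bounded model checkers automate (real tools work symbolically and at far larger scale):

```python
# Toy bounded verification: an implementation, a formal specification
# written as a predicate, and an exhaustive check over a finite domain.

def integer_sqrt(n):
    """Intended behaviour: return the largest r with r*r <= n."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def spec_holds(n, r):
    """The specification the implementation must satisfy."""
    return r * r <= n < (r + 1) * (r + 1)

# Check every input up to a bound; a single counterexample would fail.
assert all(spec_holds(n, integer_sqrt(n)) for n in range(10_000))
print("specification holds for all n < 10000")
```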
Artificial intelligence (AI) aims to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication, as found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development that require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. Meanwhile, the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[45] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[46]
Computer graphics is the study of digital visual contents, and involves synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Computer security is a branch of computer technology whose objective includes protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of concealing (encryption) and revealing (decryption) information. Modern cryptography is closely tied to computer science, since the security of many encryption and decryption algorithms rests on their computational complexity.
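The encryption/decryption pairing can be illustrated with a deliberately insecure toy: XOR with a repeating key, where applying the same operation twice recovers the plaintext. This is a sketch only; real ciphers derive their security from computationally hard problems, which a repeating-key XOR does not:

```python
from itertools import cycle

# Toy symmetric cipher: XOR each byte with a repeating key. Encryption
# and decryption are the same operation, since (b ^ k) ^ k == b.
# NOT secure in practice; for illustration only.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)    # "encryption"
recovered = xor_cipher(ciphertext, key)    # "decryption": same operation
print(recovered == plaintext)              # True
```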
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. A distributed system extends the idea of concurrency onto multiple computers connected through a network. Computers within the same distributed system have their own private memory, and information is often exchanged among themselves to achieve a common goal.
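A minimal sketch of concurrent computations interacting through shared state, using Python threads (names are illustrative). The lock serializes the read-modify-write on the shared counter; without it, interleaved updates could be lost:

```python
import threading

# Several computations executing "simultaneously" and interacting
# through shared memory; a lock makes the interaction safe.

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:            # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000: no updates lost
```

In a distributed system the same coordination problem reappears, but without shared memory; processes must exchange messages instead.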
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
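The store-and-query idea can be sketched with Python's built-in sqlite3 module, using an invented table as the database model and SQL as the query language:

```python
import sqlite3

# A throwaway in-memory relational database: one table (the model),
# SQL statements (the query language) to store and retrieve rows.
# Table and column names are invented for the example.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("The Web of Life", 1996), ("On Computable Numbers", 1936)],
)
rows = conn.execute(
    "SELECT title FROM books WHERE year < 1950 ORDER BY title"
).fetchall()
print(rows)   # [('On Computable Numbers',)]
conn.close()
```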
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it doesn't just deal with the creation or manufacture of new software, but its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers are projected to be among the fastest growing occupations from 2008 to 2018.
All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
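This reduction to 0s and 1s can be demonstrated directly in Python (helper name is ours):

```python
# Text and numbers alike reduce to strings of 0s and 1s.

def to_bits(data: bytes) -> str:
    return "".join(format(byte, "08b") for byte in data)

print(to_bits(b"Hi"))        # text as bits:    0100100001101001
print(format(42, "b"))       # integer as bits: 101010
print(int("101010", 2))      # and back again:  42
```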
Corrado Böhm and Giuseppe Jacopini's insight: only three ways of combining these actions (into more complex ones) are needed for a computer to do "anything".

Only three rules are needed to combine any set of basic instructions into more complex ones:

- Sequence: first do this, then do that.
- Selection: IF such-and-such is the case, THEN do this, ELSE do that.
- Repetition: WHILE such-and-such is the case, DO this.

Note that the three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means goto is more elementary than structured programming).
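The three rules can be seen working together in a short Python sketch (an illustrative function, not from the source), which counts the even values in a list using nothing beyond sequence, selection, and repetition:

```python
# The three Böhm–Jacopini rules in miniature.

def count_evens(values):
    count = 0                      # sequence: one step after another
    i = 0
    while i < len(values):         # repetition: WHILE ... DO
        if values[i] % 2 == 0:     # selection: IF ... THEN ... ELSE
            count += 1
        i += 1
    return count

print(count_evens([3, 4, 7, 10, 12]))   # 3
```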
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications.[48][49] One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[50]
Since computer science is a relatively new field, it is not as widely taught in schools and universities as other academic subjects. For example, in 2014, Code.org estimated that only 10 percent of high schools in the United States offered computer science education.[51] A 2010 report by the Association for Computing Machinery (ACM) and the Computer Science Teachers Association (CSTA) revealed that only 14 out of 50 states had adopted significant education standards for high school computer science.[52] However, computer science education is growing. Some countries, such as Israel, New Zealand and South Korea, have already included computer science in their national secondary education curricula.[53][54] Several countries are following suit.[55]
+
In most countries, there is a significant gender gap in computer science education. For example, in the US about 20% of computer science degrees in 2012 were conferred to women.[56] This gender gap also exists in other Western countries.[57] However, in some parts of the world, the gap is small or nonexistent. In 2011, approximately half of all computer science degrees in Malaysia were conferred to women.[58] In 2001, women made up 54.5% of computer science graduates in Guyana.[57]
^"The introduction of punched cards into the new engine was important not only as a more convenient form of control than the drums, or because programs could now be of unlimited extent, and could be stored and repeated without the danger of introducing errors in setting the machine by hand; it was important also because it served to crystallize Babbage's feeling that he had invented something really new, something much more than a sophisticated calculating machine." Bruce Collier, 1970
+
^See the entry "Computer science" on Wikiquote for the history of this quotation.
^"In this sense Aiken needed IBM, whose technology included the use of punched cards, the accumulation of numerical data, and the transfer of numerical data from one register to another", Bernard Cohen, p.44 (2000)
^Abelson, H.; G.J. Sussman with J. Sussman (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. ISBN 0-262-01153-0. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.
^Wegner, P. (October 13–15, 1976). Research paradigms in computer science—Proceedings of the 2nd international Conference on Software Engineering. San Francisco, California, United States: IEEE Computer Society Press, Los Alamitos, CA.
+
^Denning, P. J.; Comer, D. E.; Gries, D.; Mulder, M. C.; Tucker, A.; Turner, A. J.; Young, P. R. (Jan 1989). "Computing as a discipline". Communications of the ACM 32: 9–23. doi:10.1145/63238.63239.
^ a b Louis Fein (1959). "The Role of the University in Computers, Data Processing, and Related Fields". Communications of the ACM 2 (9): 7–14. doi:10.1145/368424.368427.
^P. Mounier-Kuhn, L'Informatique en France, de la seconde guerre mondiale au Plan Calcul. L'émergence d'une science, Paris, PUPS, 2010, ch. 3 & 4.
+
^Tedre, M. (2011). "Computing as a Science: A Survey of Competing Viewpoints". Minds and Machines 21 (3): 361–387. doi:10.1007/s11023-011-9240-4.
+
^Parnas, D. L. (1998). "Software engineering programmes are not computer science programmes". Annals of Software Engineering 6: 19–37. doi:10.1023/A:1018949113292., p. 19: "Rather than treat software engineering as a subfield of computer science, I treat it as an element of the set, Civil Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering, […]"
^Meyer, Bertrand (April 2009). "Viewpoint: Research evaluation for computer science". Communications of the ACM 52 (4): 31–34. doi:10.1145/1498765.1498780.
"Within more than 70 chapters, every one new or significantly revised, one can find any kind of information and references about computer science one can imagine. […] all in all, there is absolutely nothing about Computer Science that cannot be found in the 2.5 kilogram-encyclopaedia with its 110 survey articles […]." (Christoph Meinel, Zentralblatt MATH)
"[…] this set is the most unique and possibly the most useful to the [theoretical computer science] community, in support both of teaching and research […]. The books can be used by anyone wanting simply to gain an understanding of one of these areas, or by someone desiring to be in research in a topic, or by instructors wishing to find timely information on a subject they are teaching outside their major areas of expertise." (Rocky Ross, SIGACT News)
"Since 1976, this has been the definitive reference work on computer, computing, and computer science. […] Alphabetically arranged and classified into broad subject areas, the entries cover hardware, computer systems, information and data, software, the mathematics of computing, theory of computation, methodologies, applications, and computing milieu. The editors have done a commendable job of blending historical perspective and practical reference information. The encyclopedia remains essential for most public and academic library reference collections." (Joe Accardin, Northeastern Illinois Univ., Chicago)
Cohen, Bernard (2000). Howard Aiken: Portrait of a Computer Pioneer. The MIT Press. ISBN 978-0-262-53179-5.
+
Tedre, Matti (2014). The Science of Computing: Shaping a Discipline. CRC Press, Taylor & Francis.
+
Randell, Brian (1973). The Origins of Digital Computers: Selected Papers. Springer-Verlag. ISBN 3-540-06169-X.
+
+
"Covering a period from 1966 to 1993, its interest lies not only in the content of each of these papers — still timely today — but also in their being put together so that ideas expressed at different times complement each other nicely." (N. Bernard, Zentralblatt MATH)
Research evaluation for computer science, Informatics Europe report. Shorter journal version: Bertrand Meyer, Christine Choppy, Jan van Leeuwen and Jorgen Staunstrup, Research evaluation for computer science, in Communications of the ACM, vol. 52, no. 4, pp. 31–34, April 2009.
Norman Gibbs, Allen Tucker. "A model curriculum for a liberal arts degree in computer science". Communications of the ACM, Volume 29 Issue 3, March 1986.
CiteSeerx (article): search engine, digital library and repository for scientific and academic papers with a focus on computer and information science.
Concurrent computing is a form of computing in which several computations are executing during overlapping time periods—concurrently—instead of sequentially (one completing before the next starts). This is a property of a system—this may be an individual program, a computer, or a network—and there is a separate execution point or "thread of control" for each computation ("process"). A concurrent system is one where a computation can advance without waiting for all other computations to complete; where more than one computation can advance at the same time.[1]
Concurrent computing is related to but distinct from parallel computing, though these concepts are frequently confused,[2][3] and both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant, for example on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[4]:1
+
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
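This interleaving can be sketched with Python generators standing in for processes (an illustrative toy scheduler; the names round_robin and process are invented here, and each yield point plays the role of the end of a time slice): every "process" runs for one step, is paused, and later resumes where it left off.

```python
def process(name, steps):
    # A cooperative "process": yields control back after every step.
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(*procs):
    # Run the processes one step at a time, in rotation, until all finish.
    queue = list(procs)
    trace = []
    while queue:
        p = queue.pop(0)
        try:
            trace.append(next(p))  # execute one step of this process
            queue.append(p)        # not finished: back of the queue
        except StopIteration:
            pass                   # finished: drop it
    return trace

print(round_robin(process("T1", 2), process("T2", 2)))
# ['T1 step 0', 'T2 step 0', 'T1 step 1', 'T2 step 1']
```

At any instant only one process is executing, yet both are part-way through execution for most of the run, which is exactly the single-core concurrency described above.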
+
Concurrent computations may be executed in parallel,[2][5] for example by assigning each process to a separate processor or processor core, or distributing a computation across a network, but in general, the languages, tools and techniques for parallel programming may not be suitable for concurrent programming, and vice versa.
+
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:
+
+
T1 may be executed and finished before T2 or vice versa (serial and sequential);
+
T1 and T2 may be executed alternately (serial and concurrent);
+
T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent).
+
+
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.[6] A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends), is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control.
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[5] Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:
Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
+
Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires some form of arbiter somewhere in the implementation to mediate access to these resources.
+
Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.
Increased program throughput—parallel execution of a concurrent program allows the number of tasks completed in a given time to increase.
+
High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
+
More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes.
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
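A minimal sketch of this style in Python (illustrative, not from the source): a threading.Lock serves as the mutex that makes the balance check and update atomic, so concurrent withdrawals can no longer both pass the check against the same balance.

```python
import threading

balance = 500
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                 # only one thread at a time runs this block
        if balance >= amount:  # check and subtract are now one atomic step
            balance -= amount
            return True
        return False

threads = [threading.Thread(target=withdraw, args=(a,)) for a in (300, 350)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 200 or 150: exactly one of the two withdrawals succeeded
```

Which withdrawal succeeds depends on the scheduler, but the lock guarantees the account can never be overdrawn.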
Concurrent components communicate by exchanging messages (exemplified by Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming.[citation needed] A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence.
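A minimal message-passing sketch in Python (illustrative; Python's queue module stands in for the channels of Erlang or occam): the threads share no mutable state of their own and communicate only by sending messages through a queue, with None as a conventional end-of-stream sentinel.

```python
import queue
import threading

inbox = queue.Queue()

def consumer(results):
    # Receive messages until the sentinel arrives; never touches
    # any state shared with the sender except the queue itself.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        results.append(msg * 2)

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
for n in (1, 2, 3):
    inbox.put(n)        # asynchronous send: does not wait for the receiver
inbox.put(None)         # signal that no more messages will come
t.join()
print(results)  # [2, 4, 6]
```

Because all coordination goes through the queue, there is no check-then-act race of the kind shown for the shared balance.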
+
+
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
+
The academic study of concurrent algorithms started in the 1960s, with Dijkstra's 1965 paper credited as the first in the field, identifying and solving the mutual exclusion problem.[7]
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.[citation needed]
+
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:
+
+
Ada—general purpose, with native support for message passing and monitor based concurrency
+
Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs
+
Alice—extension to Standard ML, adds support for concurrency via futures
ECMAScript—promises available in various libraries, proposed for inclusion in standard in ECMAScript 6
+
Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract
+
Elixir—dynamic and functional meta-programming aware language running on the Erlang VM.
+
Erlang—uses asynchronous message passing with nothing shared
+
FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler
+
Fortran—coarrays and do concurrent are part of Fortran 2008 standard
+
Go—for system programming, with a concurrent programming model based on CSP
+
Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing
Rust—for system programming, focus on massive concurrency, using message-passing with move semantics, shared immutable memory, and shared mutable memory that is provably free of race conditions.[9]
+
SALSA—actor-based with token-passing, join, and first-class continuations for distributed computing over the Internet
+
Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
+
SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions
^This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A one-core, one-processor machine may be capable of some parallelism, such as with a coprocessor, but the processor alone is not.
^Armstrong, Joe (2003). "Making reliable distributed systems in the presence of software errors".
Patterson, David A.; Hennessy, John L. (2013). Computer Organization and Design: The Hardware/Software Interface. The Morgan Kaufmann Series in Computer Architecture and Design (5 ed.). Morgan Kaufmann. ISBN 978-0-12-407886-4.
Filman, Robert E.; Daniel P. Friedman (1984). Coordinated Computing: Tools and Techniques for Distributed Software. New York: McGraw-Hill. p. 370. ISBN 0-07-022439-0.
Representation of consciousness from the seventeenth century
Consciousness is the state or quality of awareness, or of being aware of an external object or something within oneself.[1][2] It has been defined as: sentience, awareness, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.[3] Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is.[4] As Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."[5]
+
Western philosophers, since the time of Descartes and Locke, have struggled to comprehend the nature of consciousness and pin down its essential properties. Issues of concern in the philosophy of consciousness include whether the concept is fundamentally coherent; whether consciousness can ever be explained mechanistically; whether non-human consciousness exists and if so how can it be recognized; how consciousness relates to language; whether consciousness can be understood in a way that does not require a dualistic distinction between mental and physical states or properties; and whether it may ever be possible for computing machines like computers or robots to be conscious, a topic studied in the field of artificial intelligence.
+
Thanks to recent developments in technology, consciousness has become a significant topic of research in psychology, neuropsychology and neuroscience within the past few decades. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. The majority of experimental studies assess consciousness by asking human subjects for a verbal report of their experiences (e.g., "tell me if you notice anything when I do this"). Issues of interest include phenomena such as subliminal perception, blindsight, denial of impairment, and altered states of consciousness produced by alcohol and other drugs, or spiritual or meditative techniques.
+
In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli.[6] Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted.[7]
John Locke, British philosopher active in the 17th century
The origin of the modern concept of consciousness is often attributed to John Locke's Essay Concerning Human Understanding, published in 1690.[8] Locke defined consciousness as "the perception of what passes in a man's own mind".[9] His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755).[10] "Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and d'Alembert's Encyclopédie, as "the opinion or internal feeling that we ourselves have from what we do." [11]
+
The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as our word—it meant "knowing with", in other words "having joint or common knowledge with another".[12] There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another."[13] The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness".[14] Locke's definition from 1690 illustrates that a gradual shift in meaning had taken place.
+
A related word was conscientia, which primarily means moral conscience. In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero.[15] Here, conscientia is the knowledge that a witness has of the deed of someone else.[16] René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning.[17] Descartes used conscientia the way modern speakers would use "conscience". In Search after Truth (Regulæ ad directionem ingenii ut et inquisitio veritatis per lumen naturale, Amsterdam 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).[18][19]
The dictionary meaning of the word consciousness extends through several centuries and reflects a range of related cognate meanings, from formal definitions to somewhat more skeptical ones. One formal definition indicating the range of these cognate meanings is given in Webster's Third New International Dictionary, which states that consciousness is: "(1) a. awareness or perception of an inward psychological or spiritual fact: intuitively perceived knowledge of something in one's inner self. b. inward awareness of an external object, state, or fact. c: concerned awareness: INTEREST, CONCERN -- often used with an attributive noun. (2): the state or activity that is characterized by sensation, emotion, volition, or thought: mind in the broadest possible sense: something in nature that is distinguished from the physical. (3): the totality in psychology of sensations, perceptions, ideas, attitudes and feelings of which an individual or a group is aware at any given time or within a particular time span -- compare STREAM OF CONSCIOUSNESS."
Consciousness—Philosophers have used the term 'consciousness' for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.[20]
+
+
In a more skeptical definition of consciousness, Stuart Sutherland has exemplified some of the difficulties in fully ascertaining all of its cognate meanings in his entry for the 1989 version of the Macmillan Dictionary of Psychology:
+
+
Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.[21]
+
+
Most writers on the philosophy of consciousness have been concerned to defend a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.
Philosophers and non-philosophers differ in their intuitions about what consciousness is.[22] While most people have a strong intuition for the existence of what they refer to as consciousness,[23] skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that the traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any such thing as consciousness separated from behavioral and linguistic understandings.[24] More generally, many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness.[21]
Many philosophers have argued that consciousness is a unitary concept that is understood intuitively by the majority of people in spite of the difficulty in defining it.[23] Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), or else is an umbrella term encompassing a variety of distinct meanings with no simple element in common.[25]
+
Ned Block proposed a distinction between two types of consciousness that he called phenomenal (P-consciousness) and access (A-consciousness).[26] P-consciousness, according to Block, is simply raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction,[27] others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.[28]
+
Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.[29]
+
There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."[30]
Illustration of dualism by René Descartes. Inputs are passed by the sensory organs to the pineal gland and from there to the immaterial spirit.
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated: but what is the basis of this connection and correlation between what seem to be two very different kinds of processes?
+
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension).[31] He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.[32]
+
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed.[33] However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes' rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these camps.[34]
+
Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract.[35] The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman[36] and Antonio Damasio,[37] and by philosophers such as Daniel Dennett,[38] seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch,[39] have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.[40]
+
A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness.[41] Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories has been confirmed by experiment. Recent publications by G. Guerreschi, J. Cai, S. Popescu, and H. Briegel[42] could falsify proposals such as those of Hameroff, which rely on quantum entanglement in proteins. At present, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.[43]
+
Apart from the general "hard problem" of consciousness (roughly speaking, the question of how mental experience arises from a physical basis),[44] a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not?[45] This is called the problem of other minds.[46] It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness.[47] Related issues have also been studied extensively by Greg Littmann of the University of Illinois[48] and by Colin Allen, a professor at Indiana University, in the literature and research on artificial intelligence in androids.[49]
+
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do.[50] There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe.[50] Some philosophers, such as Daniel Dennett in an essay titled The Unimagined Preposterousness of Zombies, argue that people who give this explanation do not really understand what they are saying.[51] More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.[52]
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell us about their experiences.[53] Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind.[54] Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.[53]
+
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is It Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism — something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself.[55] Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent.[56] Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive — Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.[57]
+
On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge for the Francis Crick Memorial Conference, which dealt with consciousness in humans and pre-linguistic consciousness in non-human animals. After the conference, they signed, in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the conference's most important findings:
+
"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."[58]
+
"Convergent evidence indicates that non-human animals [...], including all mammals and birds, and other creatures, [...] have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."[59]
The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay.[60] However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:
+
+
It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.[61]
+
+
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test.[62] To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious,[63] while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious.[64] A third group of scholars has argued that, as technology advances, once machines begin to display substantial signs of human-like behavior, the dichotomy (between human consciousness and human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as can already be observed in nascent form within contemporary industry and technology.[48][49]
+
In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due simply to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a rulebook that pairs the Chinese symbols to be output with the Chinese symbols received as input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs into outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.[65][66]
+
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated.[67] Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version,[68] which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.[69]
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods.[70] In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones.[71] Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books,[72] journals such as Consciousness and Cognition, Frontiers in Consciousness Research, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness.[73]
+
Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.[39]
Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness.[74] For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same.[75] The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness.[76]
+
Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues.[77] For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected.[78] Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted.[79] Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.[80]
+
Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion.[77] In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent.[81] The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness.[76] Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains.[82]
+
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror and seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves.[83] Humans (older than 18 months) and other great apes, bottlenose dolphins, killer whales, pigeons, European magpies and elephants have all been observed to pass this test.[84]
Schema of the neural processes underlying consciousness, from Christof Koch
+
+
+
+
A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.[85]
+
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christoph von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience.[86] Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.[87]
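To make the notion of gamma-band activity concrete, the sketch below estimates the share of signal power in the gamma band (taken here as 30–100 Hz) for a synthetic signal. The signal, sampling rate, and band edges are illustrative assumptions only; they are not drawn from any of the studies cited above.

```python
import numpy as np

fs = 1000                       # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)   # two seconds of samples

# Synthetic signal: a 10 Hz "alpha" component, a weaker 40 Hz "gamma"
# component, and a small amount of white noise.
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 40 * t)
     + 0.2 * rng.standard_normal(t.size))

# One-sided power spectrum via the real FFT.
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size

# Sum the power falling inside the (assumed) 30-100 Hz gamma band.
band = (freqs >= 30) & (freqs <= 100)
gamma_power = power[band].sum()
total_power = power[freqs > 0].sum()
print(f"gamma fraction of total power: {gamma_power / total_power:.2f}")
```

Whether such gamma-band activity causes conscious binding or merely accompanies it is exactly what the work cited above debates; the computation only illustrates what "power in the gamma band" means operationally.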
+
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex show clear electrical responses to a stimulus.[88] Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity.[89] The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry).[90]
+
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony.[91] An fMRI investigation suggested that these findings were strictly limited to the primary visual areas.[92] This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some types of qualia.
+
In 2011, Graziano and Kastner[93] proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.
+
In 2013, the perturbational complexity index (PCI) was proposed: a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals who are awake, in REM sleep, or in a locked-in state than in those who are in deep sleep or in a vegetative state,[94] making it potentially useful as a quantitative assessment of states of consciousness.
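The algorithmic-complexity core of such a measure can be sketched briefly. PCI is derived from the Lempel-Ziv compressibility of a binarized cortical response; the minimal parser below counts LZ76-style phrases in a binary string. The binarization, source modelling, and normalization steps of the published method are omitted, and the function is only an illustration of the underlying complexity measure.

```python
def lempel_ziv_complexity(s: str) -> int:
    """Count the phrases produced by a simple LZ76-style parsing of a
    binary string; more phrases means a less compressible, and in this
    sense more complex, response."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs earlier
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1   # one new phrase ends here
        i += l
    return c

print(lempel_ziv_complexity("0000000000"))        # flat response: 2 phrases
print(lempel_ziv_complexity("0110100110010110"))  # irregular response: 7 phrases
```

In applications along the lines of PCI, the raw phrase count is normalized (for example by n / log2(n)) before responses of different lengths or brain states are compared.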
+
Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious — a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds — there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologues can be identified? The general conclusion from the study by Butler, et al.,[95] is that some of the major theories for the mammalian brain [96][97][98] also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch,[96] Edelman and Tononi,[97] and Cotterill [98] seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity.[97] Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologues have a different morphology. Likewise, the theory of Eccles[99][100] seems incompatible, since a structural homologue/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. 
The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.
+
Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans.[101]
Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles.[102] Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago.[103] Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness.[57] Each of these scenarios raises the question of the possible survival value of consciousness.
+
Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity — "as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery".[104] To this William James objects in his essay Are We Automata? with an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value itself; and it could only have had this if it had been efficacious.[105][106] Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.[107]
+
Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent.[108] This has been called the integration consensus. Another example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner.[109] Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect.[110] Hence it remains unclear why any of it is conscious.
For a review of the differences between conscious and unconscious integrations, see the article by E. Morsella.[110]
+
As noted earlier, even among writers who consider consciousness to be a well-defined thing, there is widespread dispute about which animals other than humans can be said to possess it.[111] Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness.[112] In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics").[113] Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms.[114] Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality.[115] This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.
+
Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes.[116][117] No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide.[118] As a result, an exaptive explanation of consciousness has gained favor with some theorists, who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments, such as increases in brain size or cortical rearrangement.[119] Consciousness in this sense has been compared to the blind spot in the retina, where it is not an adaptation of the retina but instead just a by-product of the way the retinal axons were wired.[120] Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).
There are some brain states in which consciousness seems to be absent, including dreamless sleep, coma, and death. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage.[121] Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image and changes in meaning or significance.[122]
+
The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions.[123] Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.[124]
+
Research on the effects of partial epileptic seizures found that patients who suffer from them experience altered states of consciousness.[125][126] In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.
+
A variety of psychoactive drugs and alcohol have notable effects on consciousness.[127] These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics.[121] LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol,[127] but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.[128]
+
There has been some research into physiological changes in yogis and people who practice various techniques of meditation. Some research on brain waves during meditation has reported differences between the waves corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.[129]
+
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment.[130] Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.[131]
Phenomenology is a method of inquiry that attempts to examine the structure of consciousness in its own right, putting aside problems regarding the relationship of consciousness to the physical world. This approach was first proposed by the philosopher Edmund Husserl, and later elaborated by other philosophers and scientists.[132] Husserl's original concept gave rise to two distinct lines of inquiry, in philosophy and psychology. In philosophy, phenomenology has largely been devoted to fundamental metaphysical questions, such as the nature of intentionality ("aboutness"). In psychology, phenomenology largely has meant attempting to investigate consciousness using the method of introspection, which means looking into one's own mind and reporting what one observes. This method fell into disrepute in the early twentieth century because of grave doubts about its reliability, but has been rehabilitated to some degree, especially when used in combination with techniques for examining brain activity.[133]
Neon color spreading effect. The apparent bluish tinge of the white areas inside the circle is an illusion.

Square version of the neon spread illusion
Introspectively, the world of conscious experience seems to have considerable structure. Immanuel Kant asserted that the world as we perceive it is organized according to a set of fundamental "intuitions", which include object (we perceive the world as a set of distinct things); shape; quality (color, warmth, etc.); space (distance, direction, and location); and time.[134] Some of these constructs, such as space and time, correspond to the way the world is structured by the laws of physics; for others the correspondence is not as clear. Understanding the physical basis of qualities, such as redness or pain, has been particularly challenging. David Chalmers has called this the hard problem of consciousness.[28] Some philosophers have argued that it is intrinsically unsolvable, because qualities ("qualia") are ineffable; that is, they are "raw feels", incapable of being analyzed into component processes.[135] Most psychologists and neuroscientists reject these arguments. For example, research on ideasthesia shows that qualia are organized into a semantic-like network. Nevertheless, it is clear that the relationship between a physical entity such as light and a perceptual quality such as color is extraordinarily complex and indirect, as demonstrated by a variety of optical illusions such as neon color spreading.[136]
+
In neuroscience, a great deal of effort has gone into investigating how the perceived world of conscious awareness is constructed inside the brain. The process is generally thought to involve two primary mechanisms: (1) hierarchical processing of sensory inputs, and (2) memory. Signals arising from sensory organs are transmitted to the brain and then processed in a series of stages, which extract multiple types of information from the raw input. In the visual system, for example, sensory signals from the eyes are transmitted to the thalamus and then to the primary visual cortex; inside the cerebral cortex they are sent to areas that extract features such as three-dimensional structure, shape, color, and motion.[137] Memory comes into play in at least two ways. First, it allows sensory information to be evaluated in the context of previous experience. Second, and even more importantly, working memory allows information to be integrated over time so that it can generate a stable representation of the world—Gerald Edelman expressed this point vividly by titling one of his books about consciousness The Remembered Present.[138] In computational neuroscience, Bayesian approaches to brain function have been used to understand both the evaluation of sensory information in light of previous experience, and the integration of information over time. Bayesian models of the brain are probabilistic inference models, in which the brain takes advantage of prior knowledge to interpret uncertain sensory inputs in order to formulate a conscious percept; Bayesian models have successfully predicted many perceptual phenomena in vision and the nonvisual senses.[139][140][141]
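A common form of such a Bayesian model combines two noisy sensory estimates of the same quantity by weighting each in proportion to its precision (the inverse of its variance). The sketch below is a generic illustration with made-up numbers, not a model taken from the cited studies:

```java
// Sketch: precision-weighted fusion of two Gaussian sensory cues, the
// standard "ideal observer" form of Bayesian cue combination. All numbers
// are illustrative.
public class CueFusion {
    // Fuse two Gaussian estimates (mean, variance); returns {mean, variance}.
    static double[] fuse(double m1, double v1, double m2, double v2) {
        double w1 = (1.0 / v1) / (1.0 / v1 + 1.0 / v2); // precision weight of cue 1
        double mean = w1 * m1 + (1.0 - w1) * m2;        // reliability-weighted average
        double var = 1.0 / (1.0 / v1 + 1.0 / v2);       // precisions add under fusion
        return new double[] {mean, var};
    }

    public static void main(String[] args) {
        // Visual cue: position 10.0, low noise; haptic cue: position 14.0, high noise.
        double[] percept = fuse(10.0, 1.0, 14.0, 4.0);
        System.out.printf("fused mean=%.2f variance=%.2f%n", percept[0], percept[1]);
        // The percept lands nearer the reliable cue: mean 10.8, variance 0.8,
        // i.e. the fused estimate is more certain than either cue alone.
    }
}
```

The same precision-weighting applies whether the "prior" is a second sensory cue or stored knowledge from previous experience, which is how these models capture both uses of memory described above.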
+
Despite the large amount of information available, many important aspects of perception remain mysterious. A great deal is known about low-level signal processing in sensory systems, but the ways by which sensory systems interact with each other, with "executive" systems in the frontal cortex, and with the language system are very incompletely understood. At a deeper level, there are still basic conceptual issues that remain unresolved.[137] Many scientists have found it difficult to reconcile the fact that information is distributed across multiple brain areas with the apparent unity of consciousness: this is one aspect of the so-called binding problem.[142] There are also some scientists who have expressed grave reservations about the idea that the brain forms representations of the outside world at all: influential members of this group include psychologist J. J. Gibson and roboticist Rodney Brooks, who both argued in favor of "intelligence without representation".[143]
The medical approach to consciousness is practically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. Whereas the philosophical approach to consciousness focuses on its fundamental nature and its contents, the medical approach focuses on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.[144]
+
Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may suffer from disorders of consciousness, or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma.[144] Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as Karen Ann Quinlan,[145] while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.[146]
In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment.[81] There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.[147]
+
The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 indicating brain death (the lowest defined level of consciousness), and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.[144]
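Under the scale as described above, the total score is simply the sum of the three subscale scores. A minimal sketch (illustrative only, not a clinical tool; the class and method names are made up):

```java
// Illustrative tally of a Glasgow Coma Scale total from its three subscales.
// Standard subscale ranges: eye response 1-4, verbal response 1-5,
// motor response 1-6, so the total always falls in 3..15.
public class GlasgowComaScale {
    static int score(int eye, int verbal, int motor) {
        if (eye < 1 || eye > 4 || verbal < 1 || verbal > 5 || motor < 1 || motor > 6)
            throw new IllegalArgumentException("subscale score out of range");
        return eye + verbal + motor;
    }

    public static void main(String[] args) {
        // Eyes open to speech (3), confused speech (4), localizes pain (5).
        System.out.println("GCS = " + score(3, 4, 5)); // prints "GCS = 12"
    }
}
```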
+
In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are recorded, and a consciousness score is derived from the complexity of that brain activity.[148]
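The published index scores complexity via the Lempel-Ziv compressibility of the evoked response. As a toy illustration of the compressibility idea only, not the published algorithm, the following LZ78-style phrase count shows how an irregular (information-rich) binary signal parses into more distinct phrases than a repetitive one:

```java
import java.util.HashSet;
import java.util.Set;

// Toy sketch of complexity scoring: count LZ78-style phrases in a
// hypothetical binarized signal. More phrases = less compressible = more
// complex. This is an illustration of the principle, not the 2013 procedure.
public class LzSketch {
    static int lzPhrases(String signal) {
        Set<String> seen = new HashSet<>();
        int phrases = 0;
        StringBuilder cur = new StringBuilder();
        for (char c : signal.toCharArray()) {
            cur.append(c);
            if (!seen.contains(cur.toString())) {
                seen.add(cur.toString()); // record the newly completed phrase
                phrases++;
                cur.setLength(0);         // start building the next phrase
            }
        }
        if (cur.length() > 0) phrases++;  // count any trailing partial phrase
        return phrases;
    }

    public static void main(String[] args) {
        System.out.println(lzPhrases("0101010101010101")); // repetitive: prints 7
        System.out.println(lzPhrases("0110100110010111")); // irregular: prints 8
    }
}
```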
Locked-in syndrome
The patient has awareness, sleep-wake cycles, and meaningful behavior (viz., eye-movement), but is isolated due to quadriplegia and pseudobulbar palsy.

Minimally conscious state
The patient has intermittent periods of awareness and wakefulness and displays some meaningful behavior.

Persistent vegetative state
The patient has sleep-wake cycles, but lacks awareness and only displays reflexive and non-purposeful behavior.

Chronic coma
The patient lacks awareness and sleep-wake cycles and only displays reflexive behavior.

Brain death
The patient lacks awareness, sleep-wake cycles, and brain-mediated reflexive behavior.
One of the most striking disorders of consciousness goes by the name anosognosia, a Greek-derived term meaning unawareness of disease. This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them.[154] The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.[155]
William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890. According to James, the "stream of thought" is governed by five characteristics: "(1) Every thought tends to be part of a personal consciousness. (2) Within each personal consciousness thought is always changing. (3) Within each personal consciousness thought is sensibly continuous. (4) It always appears to deal with objects independent of itself. (5) It is interested in some parts of these objects to the exclusion of others".[156] A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". In the Buddhist view, though, the "mindstream" is viewed primarily as a source of noise that distracts attention from a changeless underlying reality.[157]
In the West, the primary impact of the idea has been on literature rather than science: stream of consciousness as a narrative mode means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays, and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers.[158]
+
Here for example is a passage from Joyce's Ulysses about the thoughts of Molly Bloom:
+
+
Yes because he never did a thing like that before as ask to get his breakfast in bed with a couple of eggs since the City Arms hotel when he used to be pretending to be laid up with a sick voice doing his highness to make himself interesting for that old faggot Mrs Riordan that he thought he had a great leg of and she never left us a farthing all for masses for herself and her soul greatest miser ever was actually afraid to lay out 4d for her methylated spirit telling me all her ailments she had too much old chat in her about politics and earthquakes and the end of the world let us have a bit of fun first God help the world if all the women were her sort down on bathingsuits and lownecks of course nobody wanted her to wear them I suppose she was pious because no man would look at her twice I hope Ill never be like her a wonder she didnt want us to cover our faces but she was a welleducated woman certainly and her gabby talk about Mr Riordan here and Mr Riordan there I suppose he was glad to get shut of her.[159]
To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world. Krishna consciousness, for example, is a term used to mean an intimate linkage between the mind of a worshipper and the god Krishna.[160] The mystical psychiatrist Richard Maurice Bucke distinguished between three types of consciousness: Simple Consciousness, awareness of the body, possessed by many animals; Self Consciousness, awareness of being aware, possessed only by humans; and Cosmic Consciousness, awareness of the life and order of the universe, possessed only by humans who are enlightened.[161] Many more examples could be given. The most thorough account of the spiritual approach may be Ken Wilber's book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.[162]
^Robert van Gulick (2004). "Consciousness". Stanford Encyclopedia of Philosophy.
+
^Farthing G (1992). The Psychology of Consciousness. Prentice Hall. ISBN978-0-13-728668-3.
+
^John Searle (2005). "Consciousness". In Honderich T. The Oxford companion to philosophy. Oxford University Press. ISBN978-0-19-926479-7.
+
^Susan Schneider and Max Velmans (2008). "Introduction". In Max Velmans, Susan Schneider. The Blackwell Companion to Consciousness. Wiley. ISBN978-0-470-75145-9.
+
^Güven Güzeldere (1997). Ned Block, Owen Flanagan, Güven Güzeldere, eds. The Nature of Consciousness: Philosophical debates. Cambridge, MA: MIT Press. pp. 1–67.
+
^J. J. Fins, N. D. Schiff, and K. M. Foley (2007). "Late recovery from the minimally conscious state: ethical and policy implications". Neurology68 (4): 304–307. doi:10.1212/01.wnl.0000252376.43779.96. PMID17242341.
^Jaucourt, Louis, chevalier de. "Consciousness." The Encyclopedia of Diderot & d'Alembert Collaborative Translation Project. Translated by Scott St. Louis. Ann Arbor: Michigan Publishing, University of Michigan Library, 2014. http://hdl.handle.net/2027/spo.did2222.0002.986. Originally published as "Conscience," Encyclopédie ou Dictionnaire raisonné des sciences, des arts et des métiers, 3:902 (Paris, 1753).
^Barbara Cassin (2014). Dictionary of Untranslatables. A Philosophical Lexicon. Princeton University Press. p. 176. ISBN978-0-691-13870-1.
+
^G. Molenaar (1969). "Seneca's Use of the Term Conscientia". Mnemosyne22: 170–180. doi:10.1163/156852569x00670.
+
^Boris Hennig (2007). "Cartesian Conscientia". British Journal for the History of Philosophy15: 455–484. doi:10.1080/09608780701444915.
+
^Charles Adam, Paul Tannery (eds.), Oeuvres de Descartes X, 524 (1908).
+
^Sara Heinämaa, Vili Lähteenmäki, Pauliina Remes (eds.) (2007). Consciousness: from perception to reflection in the history of philosophy. Springer. pp. 205–206. ISBN978-1-4020-6081-6.
^Antonio Damasio (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Press. ISBN978-0-15-601075-7.
^Stuart J. Russell, Peter Norvig (2010). "Ch. 26: Philosophical foundations". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN978-0-13-604259-4.
^Moshe Idel (1990). Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. SUNY Press. ISBN978-0-7914-0160-6. Note: In many stories the Golem was mindless, but some gave it emotions or thoughts.
^Horst Hendriks-Jansen (1996). Catching ourselves in the act: situated activity, interactive emergence, evolution, and human thought. Massachusetts Institute of Technology. p. 114. ISBN0-262-08246-2.
+
^Mandler, G. Consciousness: Respectable, useful, and probably necessary. In R. Solso (Ed.), Information processing and cognition. NJ: LEA.
+
^Mandler, G. Consciousness recovered: Psychological functions and origins of thought. Philadelphia: John Benjamins. 2002
^Paul Rooks and Jane Wilson (2000). Perception: Theory, Development, and Organization. Psychology Press. pp. 25–26. ISBN978-0-415-19094-7.
+
^ abThomas Schmidt and Dirk Vorberg (2006). "Criteria for unconscious cognition: Three types of dissociation". Perception and Psychophysics68 (3): 489–504. doi:10.3758/bf03193692. PMID16900839.
+
^ abArnaud Destrebecqz and Philippe Peigneux (2006). "Methods for studying unconscious learning". In Steven Laureys. The Boundaries of Consciousness: Neurobiology and Neuropathology. Elsevier. pp. 69–80. ISBN978-0-444-52876-6.
^Daniel Dennett (2003). "Who's on first? Heterophenomenology explained". Journal of Consciousness Studies10: 19–30.
+
^David Chalmers (1996). "Ch. 3: Can consciousness be reductively explained?". The Conscious Mind. Oxford University Press. ISBN978-0-19-511789-9.
+
^ abJ. T. Giacino, C. M. Smart (2007). "Recent advances in behavioral assessment of individuals with disorders of consciousness". Current Opinion in Neurology20 (6): 614–619. doi:10.1097/WCO.0b013e3282f189ef. PMID17992078.
+
^Patrick Haggard (2008). "Human volition: towards a neuroscience of will". Nature Reviews Neuroscience9 (12): 934–946. doi:10.1038/nrn2497. PMID19020512.
^Biederlack J., Castelo-Branco M., Neuenschwander S., Wheeler D.W., Singer W., Nikolić D. (2006). "Brightness induction: Rate enhancement and neuronal synchronization as complementary codes". Neuron52: 1073–1083. doi:10.1016/j.neuron.2006.11.012.
+
^Williams Adrian L., Singh Krishna D., Smith Andrew T. (2003). "Surround modulation measured with functional MRI in the human visual cortex". Journal of Neurophysiology89 (1): 525–533. doi:10.1152/jn.00048.2002.
^Adenauer G. Casali; Olivia Gosseries; Mario Rosanova; Mélanie Boly; Simone Sarasso; Karina R. Casali; Silvia Casarotto; Marie-Aurélie Bruno; Steven Laureys; Giulio Tononi; Marcello Massimini (14 August 2013). "A theoretically based index of consciousness independent of sensory processing and behavior". Science Translational Medicine5 (198): 198ra105. doi:10.1126/scitranslmed.3006294.
+
^Ann B. Butler, Paul R. Manger, B.I.B Lindahl, and Peter Århem (2005). "Evolution of the neural basis of consciousness: a bird-mammal comparison". BioEssays27: 923–936. doi:10.1002/bies.20280.
^ abRodney M.J. Cotterill (2001). "Cooperation of the basal ganglia, cerebellum, sensory cerebrum and hippocampus: possible implications for cognition, consciousness, intelligence and creativity". Progress in Neurobiology64: 1–33. doi:10.1016/s0301-0082(00)00058-7.
^John Eccles (1990). "A unitary hypothesis of mind-brain interaction in the cerebral cortex". Proceedings of the Royal Society of London, B240: 433–451. doi:10.1098/rspb.1990.0047.
+
^Joaquin Fuster, The Prefrontal Cortex, Second Edition.
^Seth, Anil; Eugene Izhikevich; George Reeke; Gerald Edelman (2006). "Theories and measures of consciousness: An extended framework". Proceedings of the National Academy of Sciences103 (28): 10799–10804. doi:10.1073/pnas.0604347103.
+
^ abEzequiel Morsella (2005). "The function of phenomenal states: Supramodular Interaction Theory". Psychological Review112 (4): 1000–1021. doi:10.1037/0033-295X.112.4.1000. PMID16262477.
+
^S. Budiansky (1998). If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness. The Free Press. ISBN978-0-684-83710-9.
+
^S. Nichols, T. Grantham (2000). "Adaptive Complexity and Phenomenal Consciousness". Philosophy of Science67: 648–670. doi:10.1086/392859.
^Stevan Harnad (2002). "Turing indistinguishability and the Blind Watchmaker". In J. H. Fetzer. Consciousness Evolving. John Benjamins. Retrieved 2011-10-26.
^Zack ROBINSON, Corey J. MALEY, Gualtiero PICCININI (2015). "Is Consciousness a Spandrel?.". Journal of the American Philosophical Association1: 365–383. doi:10.1017/apa.2014.10.
^Schacter, Daniel; Gilbert, Daniel; Wegner, Daniel (2011). Psychology (2nd ed.). Worth Publishers. p. 190. ISBN1-4292-3719-8.
^J. Allan Hobson, Edward F. Pace-Schott, and Robert Stickgold (2003). "Dreaming and the brain: Toward a cognitive neuroscience of conscious states". In Edward F. Pace-Schott, Mark Solms, Mark Blagrove, Stevan Harnad. Sleep and Dreaming: Scientific Advances and Reconsiderations. Cambridge University Press. ISBN978-0-521-00869-3.
+
^Johanson, M., Valli, K., Revonsuo, A., & Wedlund, J. (2008). "Content analysis of subjective experiences in partial epileptic seizures". Epilepsy and Behavior12: 170–182.
+
^Johanson M., Valli K., Revonsuo A.; et al. (2008). "Alterations in the contents of consciousness in partial epileptic seizures". Epilepsy and Behavior13: 366–371. doi:10.1016/j.yebeh.2008.04.014.
^M. Murphy, S. Donovan, and E. Taylor (1997). The Physical and Psychological Effects of Meditation: A Review of Contemporary Research With a Comprehensive Bibliography, 1931–1996. Institute of Noetic Sciences.
^Robert Sokolowski (2000). Introduction to Phenomenology. Cambridge University Press. pp. 211–227. ISBN978-0-521-66792-0.
+
^K. Anders Ericsson (2003). "Valid and non-reactive verbalization of thoughts during performance of tasks: towards a solution to the central problems of introspection as a source of scientific evidence". In Anthony Jack, Andreas Roepstorff. Trusting the Subject?: The Use of Introspective Evidence in Cognitive Science, Volume 1. Imprint Academic. pp. 1–18. ISBN978-0-907845-56-0.
^Joseph Levine (1998). "On leaving out what it's like". In N. Block, O. Flanagan, G. Guzeldere. The Nature of Consciousness: Philosophical Debates. MIT Press. ISBN978-0-262-52210-6.
+
^Steven K. Shevell (2003). "Color appearance". In Steven K. Shevell. The Science of Color. Elsevier. pp. 149–190. ISBN978-0-444-51251-2.
+
^ abM. R. Bennett and Peter Michael Stephan Hacker (2003). Philosophical Foundations of Neuroscience. Wiley-Blackwell. pp. 121–147. ISBN978-1-4051-0838-6.
^ abcHal Blumenfeld (2009). "The neurological examination of consciousness". In Steven Laureys, Giulio Tononi. The Neurology of Consciousness: Cognitive Neuroscience and Neuropathology. Academic Press. ISBN978-0-12-374168-4.
+
^Kinney HC, Korein J, Panigrahy A, Dikkes P, Goode R (26 May 1994). "Neuropathological findings in the brain of Karen Ann Quinlan -- the role of the thalamus in the persistent vegetative state". N Engl J Med330 (21): 1469–1475. doi:10.1056/NEJM199405263302101. PMID8164698.
^V. Mark Durand and David H. Barlow (2009). Essentials of Abnormal Psychology. Cengage Learning. pp. 74–75. ISBN978-0-495-59982-1. Note: A patient who can additionally describe the current situation may be referred to as "oriented times four".
^Coleman MR, Davis MH, Rodd JM, Robson T, Ali A, Owen AM, Pickard JD (September 2009). "Towards the routine use of brain imaging to aid the clinical diagnosis of disorders of consciousness". Brain132 (9): 2541–2552. doi:10.1093/brain/awp183. PMID19710182.
+
^Monti MM, Vanhaudenhuyse A, Coleman MR, Boly M, Pickard JD, Tshibanda L, Owen AM, Laureys S (18 Feb 2010). "Willful modulation of brain activity in disorders of consciousness". N Engl J Med362 (7): 579–589. doi:10.1056/NEJMoa0905370. PMID20130250.
+
^Seel RT, Sherer M, Whyte J, Katz DI, Giacino JT, Rosenbaum AM, Hammond FM, Kalmar K, Pape TL; et al. (December 2010). "Assessment scales for disorders of consciousness: evidence-based recommendations for clinical practice and research". Arch Phys Med Rehabil91 (12): 1795–1813. doi:10.1016/j.apmr.2011.01.002. PMID21112421.
+
^George P. Prigatano and Daniel Schacter (1991). "Introduction". In George Prigatano, Daniel Schacter. Awareness of Deficit After Brain Injury: Clinical and Theoretical Issues. Oxford University Press. pp. 3–16. ISBN0-19-505941-7.
+
^Kenneth M. Heilman (1991). "Anosognosia: possible neuropsychological mechanisms". In George Prigatano, Daniel Schacter. Awareness of Deficit After Brain Injury: Clinical and Theoretical Issues. Oxford University Press. pp. 53–62. ISBN0-19-505941-7.
+
^William James (1890). The Principles of Psychology, Volume 1. H. Holt. p. 225.
+
^Dzogchen Rinpoche (2007). "Taming the mindstream". In Doris Wolter. Losing the Clouds, Gaining the Sky: Buddhism and the Natural Mind. Wisdom Publications. pp. 81–92. ISBN978-0-86171-359-2.
+
^Robert Humphrey (1954). Stream of Consciousness in the Modern Novel. University of California Press. pp. 23–49. ISBN978-0-520-00585-3.
+
^James Joyce (1990). Ulysses. BompaCrazy.com. p. 620.
+
^Lynne Gibson (2002). Modern World Religions: Hinduism. Heinemann Educational Publishers. pp. 2–4. ISBN0-435-33619-3.
Antonio Damasio (2012). Self Comes to Mind: Constructing the Conscious Brain. Vintage. ISBN978-0307474957.
+
Philip David Zelazo, Morris Moscovitch, Evan Thompson (2007). The Cambridge Handbook of Consciousness. Cambridge University Press. ISBN978-0-521-67412-6.
The latest version is Java 8, which is the only version currently supported for free by Oracle, although earlier versions are supported both by Oracle and other companies on a commercial basis.
James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991.[20] Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time.[21] The language was initially called Oak after an oak tree that stood outside Gosling's office. Later the project went by the name Green and was finally renamed Java, from Java coffee.[22] Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar.[23]
+
Sun Microsystems released the first public implementation as Java 1.0 in 1995.[1] It promised "Write Once, Run Anywhere" (WORA), providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification.[24] With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms. J2EE included technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively.
+
In 1997, Sun Microsystems approached the ISO/IEC JTC 1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process.[25][26][27] Java remains a de facto standard, controlled through the Java Community Process.[28] At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System.
+
On November 13, 2006, Sun released much of its Java virtual machine (JVM) as free and open-source software (FOSS), under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of its JVM's core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.[29]
+
Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an "evangelist".[30] Following Oracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the "steward of Java technology with a relentless commitment to fostering a community of participation and transparency".[31] This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK (see Google section below). Java software runs on everything from laptops to data centers, game consoles to scientific supercomputers.[32] On April 2, 2010, James Gosling resigned from Oracle.[33]
+
Principles
+
There were five primary goals in the creation of the Java language:[15]
+
+
It must be "simple, object-oriented, and familiar".
It must be "robust and secure".
It must be "architecture-neutral and portable".
It must execute with "high performance".
It must be "interpreted, threaded, and dynamic".
One design goal of Java is portability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate runtime support. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to architecture-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be executed by a virtual machine (VM) written specifically for the host hardware. End users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a web browser for Java applets.
+
Standard libraries provide a generic way to access host-specific features such as graphics, threading, and networking.
+
The use of universal bytecode makes porting simple. However, the overhead of interpreting bytecode into machine instructions makes interpreted programs almost always run more slowly than native executables. To address this, just-in-time (JIT) compilers that compile bytecode to machine code during runtime were introduced from an early stage. Java itself is platform-independent and is adapted to the particular platform it is to run on by a Java virtual machine, which translates the Java bytecode into the platform's machine language.[34]
Oracle Corporation is the current owner of the official implementation of the Java SE platform, following its acquisition of Sun Microsystems on January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available for Microsoft Windows (it still runs on XP, although only later versions are currently publicly supported), Mac OS X, Linux, and Solaris. Because Java lacks any formal standardization recognized by Ecma International, ISO/IEC, ANSI, or any other third-party standards organization, the Oracle implementation is the de facto standard.
+
The Oracle implementation is packaged into two different distributions: The Java Runtime Environment (JRE) which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and the Java Development Kit (JDK), which is intended for software developers and includes development tools such as the Java compiler, Javadoc, Jar, and a debugger.
+
OpenJDK is another notable Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation.
+
A goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insisted that all implementations be "compatible". This resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did not support RMI or JNI and had added platform-specific features of its own. Sun sued in 1997, and in 2001 won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun.[35] As a result, Microsoft no longer ships Java with Windows.
+
Platform-independent Java is essential to Java EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications.
Programs written in Java have a reputation for being slower and requiring more memory than those written in C++.[36][37] However, Java programs' execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1,[38] the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such as HotSpot becoming the default for Sun's JVM in 2000.
+
Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java in hardware instead of a software Java virtual machine, and ARM-based processors can have hardware support for executing Java bytecode through their Jazelle option, though support has mostly been dropped in current ARM implementations.
+
Automatic memory management
+
Java uses an automatic garbage collector to manage memory in the object lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector. Something similar to a memory leak may still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. If methods for a nonexistent object are called, a "null pointer exception" is thrown.[39][40]
+
One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on the stack, or explicitly allocated and deallocated from the heap. In the latter case the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable and/or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Note that garbage collection does not prevent "logical" memory leaks, i.e., those where the memory is still referenced but never used.
+
Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java.
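As a sketch of the distinction between reclaimable garbage and a "logical" leak, the following hypothetical example (all class and method names are illustrative, not from the text) keeps objects reachable through a container so the collector cannot free them:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Objects stored here stay reachable, so the garbage collector can
    // never reclaim them even if no other code uses them: a "logical" leak.
    private static final List<byte[]> cache = new ArrayList<byte[]>();

    public static int cachedBlocks() {
        return cache.size();
    }

    public static void fill(int n) {
        for (int i = 0; i < n; i++) {
            byte[] block = new byte[1024];
            cache.add(block); // the list keeps a reference: not collectible
            block = null;     // clearing the local variable does not help
        }
    }

    public static void release() {
        cache.clear(); // only now do the arrays become eligible for collection
    }

    public static void main(String[] args) {
        fill(3);
        System.out.println(cachedBlocks()); // 3: still referenced, still "leaked"
        release();
        System.out.println(cachedBlocks()); // 0: now unreachable and collectible
    }
}
```

Note that the programmer never frees the arrays explicitly; making them unreachable is the only lever Java offers.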
+
Java does not support C/C++ style pointer arithmetic, where object addresses and unsigned integers (usually long integers) can be used interchangeably. This allows the garbage collector to relocate referenced objects and ensures type safety and security.
+
As in C++ and some other object-oriented languages, variables of Java's primitive data types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for non-primitive data types (but see escape analysis). This was a conscious decision by Java's designers for performance reasons.
The syntax of Java is largely influenced by C++. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language.[15] All code is written inside classes, and every data item is an object, with the exception of the primitive data types (integers, floating-point numbers, boolean values, and characters), which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as the printf() method).
Java uses comments similar to those of C++. There are three different styles of comments: a single line style marked with two slashes (//), a multiple line style opened with /* and closed with */, and the Javadoc commenting style opened with /** and closed with */. The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program.
+
Example:
+
+
// This is an example of a single line comment using two slashes

/* This is an example of a multiple line comment using the slash and asterisk.
   This type of comment can be used to hold a lot of information or deactivate
   code, but it is very important to remember to close the comment. */

package fibsandlies;

import java.util.HashMap;
import java.util.Map;

/**
 * This is an example of a Javadoc comment; Javadoc can compile documentation
 * from this text. Javadoc comments must immediately precede the class, method, or field being documented.
 */
public class FibCalculator extends Fibonacci implements Calculator {
    private static Map<Integer, Integer> memoized = new HashMap<Integer, Integer>();

    /*
     * The main method written as follows is used by the JVM as a starting point for the program.
     */
    public static void main(String[] args) {
        memoized.put(1, 1);
        memoized.put(2, 1);
        System.out.println(fibonacci(12)); // Get the 12th Fibonacci number and print to console
    }

    /**
     * An example of a method written in Java, wrapped in a class.
     * Given a non-negative number fibIndex, returns
     * the fibIndex-th Fibonacci number.
     * @param fibIndex The index of the Fibonacci number
     * @return The Fibonacci number
     */
    public static int fibonacci(int fibIndex) {
        if (memoized.containsKey(fibIndex)) {
            return memoized.get(fibIndex);
        } else {
            int answer = fibonacci(fibIndex - 1) + fibonacci(fibIndex - 2);
            memoized.put(fibIndex, answer);
            return answer;
        }
    }
}

class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // Prints the string to the console.
    }
}
+
+
Source files must be named after the public class they contain, appending the suffix .java, for example, HelloWorldApp.java. A source file must first be compiled into bytecode, using a Java compiler, producing a file named HelloWorldApp.class. Only then can it be executed, or "launched". The Java source file may only contain one public class, but it can contain multiple classes with other than public access and any number of public inner classes. When the source file contains multiple classes, make one class "public" and name the source file after that public class.
+
A class that is not declared public may be stored in any .java file. The compiler will generate a class file for each class defined in the source file. The name of the class file is the name of the class, with .class appended. For class file generation, anonymous classes are treated as if their name were the concatenation of the name of their enclosing class, a $, and an integer.
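The naming rule for anonymous classes can be observed at run time. In this illustrative example (class and method names are hypothetical), the anonymous implementation of Greeter is compiled to a file named AnonNameDemo$1.class:

```java
public class AnonNameDemo {
    interface Greeter {
        String greet();
    }

    public static Greeter makeGreeter() {
        // An anonymous class: the compiler emits it to a class file named
        // from the enclosing class, a '$', and an integer (AnonNameDemo$1).
        return new Greeter() {
            @Override
            public String greet() {
                return "hello";
            }
        };
    }

    public static void main(String[] args) {
        // The runtime class name reflects the generated class file name.
        System.out.println(makeGreeter().getClass().getName());
    }
}
```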
+
The keyword public denotes that a method can be called from code in other classes, or that a class may be used by classes outside the class hierarchy. (The package hierarchy is reflected in the name of the directory in which the .java file is located.) This is called an access level modifier. Other access level modifiers include the keywords private and protected.
+
The keyword static in front of a method indicates a static method, which is associated only with the class and not with any specific instance of that class. Only static methods can be invoked without a reference to an object. Static methods cannot access any class members that are not also static. Methods that are not designated static are instance methods, and require a specific instance of a class to operate.
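A minimal sketch of the distinction (class and field names are hypothetical): the static method below can be invoked without an instance and sees only static state, while the instance method needs a specific object:

```java
public class Counter {
    private static int created = 0; // shared class-wide state
    private final int id;           // per-instance state

    public Counter() {
        created++;
        id = created;
    }

    // A static method: invocable without any instance, and it can only
    // touch static members such as the created counter.
    public static int instancesCreated() {
        return created;
    }

    // An instance method: it needs a specific Counter, whose fields it
    // reaches through the implicit this reference.
    public int getId() {
        return id;
    }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        System.out.println(Counter.instancesCreated()); // invoked on the class
        System.out.println(b.getId());                  // invoked on an instance
    }
}
```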
+
The keyword void indicates that the main method does not return any value to the caller. If a Java program is to exit with an error code, it must call System.exit() explicitly.
+
The method name "main" is not a keyword in the Java language. It is simply the name of the method the Java launcher calls to pass control to the program. Java classes that run in managed environments such as applets and Enterprise JavaBeans do not use or need a main() method. A Java program may contain multiple classes that have main methods, which means that the VM needs to be explicitly told which class to launch from.
+
The main method must accept an array of String objects. By convention, it is referenced as args although any other legal identifier name can be used. Since Java 5, the main method can also use variable arguments, in the form of public static void main(String... args), allowing the main method to be invoked with an arbitrary number of String arguments. The effect of this alternate declaration is semantically identical (the args parameter is still an array of String objects), but it allows an alternative syntax for creating and passing the array.
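A short illustration of the varargs form (the helper method name is hypothetical): both call syntaxes below reach the same method, and the parameter always arrives as a String[] array:

```java
public class VarargsMain {
    // A varargs parameter is sugar for an array parameter: inside the
    // method, args is an ordinary String[].
    public static int count(String... args) {
        return args.length;
    }

    // Since Java 5 the entry point itself may also be declared with
    // varargs; it is semantically identical to main(String[] args).
    public static void main(String... args) {
        System.out.println(count("a", "b"));           // varargs call syntax
        System.out.println(count(new String[] {"a"})); // explicit array syntax
    }
}
```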
+
The Java launcher launches Java by loading a given class (specified on the command line or as an attribute in a JAR) and starting its public static void main(String[]) method. Stand-alone programs must declare this method explicitly. The String[] args parameter is an array of String objects containing any arguments passed to the class. The parameters to main are often passed by means of a command line.
+
Printing is part of a Java standard library: The System class defines a public static field called out. The out object is an instance of the PrintStream class and provides many methods for printing data to standard out, including println(String) which also appends a new line to the passed string.
+
The string "Hello World!" is automatically converted to a String object by the compiler.
+
Comprehensive example
+
// OddEven.java
import javax.swing.JOptionPane;

public class OddEven {

    private int userInput; // a whole number ("int" means integer)

    /**
     * This is the constructor method. It gets called when an object of the OddEven type
     * is being created.
     */
    public OddEven() {
        /*
         * In most Java programs constructors can initialize objects with default values, or create
         * other objects that this object might use to perform its functions. In some Java programs, the
         * constructor may simply be an empty function if nothing needs to be initialized prior to the
         * functioning of the object. In this program's case, an empty constructor would suffice.
         * A constructor must exist; however, if the user doesn't put one in then the compiler
         * will create an empty one.
         */
    }

    /**
     * This is the main method. It gets called when this class is run through a Java interpreter.
     * @param args command line arguments (unused)
     */
    public static void main(final String[] args) {
        /*
         * This line of code creates a new instance of this class called "number" (also known as an
         * object) and initializes it by calling the constructor. The next line of code calls
         * the "showDialog()" method, which brings up a prompt to ask you for a number.
         */
        OddEven number = new OddEven();
        number.showDialog();
    }

    public void showDialog() {
        /*
         * "try" makes sure nothing goes wrong. If something does,
         * the interpreter skips to "catch" to see what it should do.
         */
        try {
            /*
             * The code below brings up a JOptionPane, which is a dialog box.
             * The String returned by the "showInputDialog()" method is converted into
             * an integer, making the program treat it as a number instead of a word.
             * After that, this method calls a second method, calculate(), that will
             * display either "Even" or "Odd."
             */
            userInput = Integer.parseInt(JOptionPane.showInputDialog("Please enter a number."));
            calculate();
        } catch (final NumberFormatException e) {
            /*
             * Getting in the catch block means that there was a problem with the format of
             * the number. Probably some letters were typed in instead of a number.
             */
            System.err.println("ERROR: Invalid input. Please type in a numerical value.");
        }
    }

    /**
     * When this gets called, it sends a message to the interpreter.
     * The interpreter usually shows it on the command prompt (for Windows users)
     * or the terminal (for *nix users), assuming it's open.
     */
    private void calculate() {
        if ((userInput % 2) == 0) {
            JOptionPane.showMessageDialog(null, "Even");
        } else {
            JOptionPane.showMessageDialog(null, "Odd");
        }
    }
}
+
The OddEven class declares a single private field of type int named userInput. Every instance of the OddEven class has its own copy of the userInput field. The private declaration means that no other class can access (read or write) the userInput field.
+
OddEven() is a public constructor. Constructors have the same name as the enclosing class they are declared in and, unlike a method, have no return type. A constructor is used to initialize an object that is a newly created instance of the class.
+
The calculate() method is declared without the static keyword. This means that the method is invoked using a specific instance of the OddEven class. (The reference used to invoke the method is passed as an undeclared parameter of type OddEven named this.) The method tests the expression userInput % 2 == 0 using the if keyword to see if the remainder of dividing the userInput field belonging to the instance of the class by two is zero. If this expression is true, then it prints Even; if this expression is false it prints Odd. (The calculate method can be equivalently accessed as this.calculate and the userInput field can be equivalently accessed as this.userInput, which both explicitly use the undeclared this parameter.)
+
OddEven number = new OddEven(); declares a local object reference variable in the main method named number. This variable can hold a reference to an object of type OddEven. The declaration initializes number by first creating an instance of the OddEven class, using the new keyword and the OddEven() constructor, and then assigning this instance to the variable.
+
The statement number.showDialog(); calls the showDialog method, which in turn invokes calculate. The instance of OddEven referenced by the number local variable is used to invoke the method and is passed as the undeclared this parameter.
+
userInput = Integer.parseInt(JOptionPane.showInputDialog("Please enter a number.")); is a statement that converts the String returned by the dialog to the primitive data type int by using a utility method in the primitive wrapper class Integer.
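The applet discussed below is not reproduced in this text; a reconstruction from the description might look like the following. This is an illustrative sketch only: the applet API is deprecated and has been removed from recent JDKs, and JApplet does not itself declare paintComponent, so the method here simply matches the prose rather than a true override.

```java
// Hello.java — reconstruction of the described applet (assumption, not
// the article's verbatim listing).
import java.awt.Graphics;
import javax.swing.JApplet;

public class Hello extends JApplet {
    public void paintComponent(final Graphics g) {
        // Draw the greeting 65 pixels right and 95 pixels down from the
        // applet's upper-left corner, as described in the text.
        g.drawString("Hello, world!", 65, 95);
    }
}
```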
The import statements direct the Java compiler to include the javax.swing.JApplet and java.awt.Graphics classes in the compilation. The import statement allows these classes to be referenced in the source code using the simple class name (i.e. JApplet) instead of the fully qualified class name (FQCN, i.e. javax.swing.JApplet).
+
The Hello class extends (subclasses) the JApplet (Java Applet) class; the JApplet class provides the framework for the host application to display and control the lifecycle of the applet. The JApplet class is a JComponent (Java Graphical Component) which provides the applet with the capability to display a graphical user interface (GUI) and respond to user events.
+
The Hello class overrides the paintComponent(Graphics) method (additionally indicated with the @Override annotation, supported as of JDK 1.5) inherited from the Container superclass to provide the code to display the applet. The paintComponent() method is passed a Graphics object that contains the graphic context used to display the applet. The paintComponent() method calls the graphic context's drawString(String, int, int) method to display the "Hello, world!" string at a pixel offset of (65, 95) from the upper-left corner of the applet's display.
+
+
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<!-- Hello.html -->
<html>
  <head>
    <title>Hello World Applet</title>
  </head>
  <body>
    <applet code="Hello.class" width="200" height="200">
    </applet>
  </body>
</html>
+
+
An applet is placed in an HTML document using the <applet> HTML element. The applet tag has three attributes set: code="Hello.class" specifies the name of the JApplet class, and width="200" height="200" set the pixel width and height of the applet. Applets may also be embedded in HTML using either the object or embed element,[45] although support for these elements by web browsers is inconsistent.[46] However, the applet tag is deprecated, so the object tag is preferred where supported.
+
The host application, typically a Web browser, instantiates the Hello applet and creates an AppletContext for the applet. Once the applet has initialized itself, it is added to the AWT display hierarchy. The paintComponent() method is called by the AWT event dispatching thread whenever the display needs the applet to draw itself.
Java Servlet technology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets are server-side Java EE components that generate responses (typically HTML pages) to requests (typically HTTP requests) from clients. A servlet can almost be thought of as an applet that runs on the server side—without a face.
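The Hello servlet walked through below is not shown in this text; a sketch consistent with the description, written against the javax.servlet API, might look like this (the servlet-api jar and deployment configuration are assumed and omitted):

```java
// Hello.java — a reconstruction of the described servlet (assumption,
// not the article's verbatim listing).
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.GenericServlet;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class Hello extends GenericServlet {
    @Override
    public void service(final ServletRequest request, final ServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");        // MIME type of the reply
        final PrintWriter pw = response.getWriter(); // writer for the response body
        try {
            pw.println("Hello, world!");
        } finally {
            pw.close(); // closing flushes the written data back to the client
        }
    }
}
```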
The import statements direct the Java compiler to include all the public classes and interfaces from the java.io and javax.servlet packages in the compilation. Packages make Java well suited for large scale applications.
+
The Hello class extends the GenericServlet class; the GenericServlet class provides the interface for the server to forward requests to the servlet and control the servlet's lifecycle.
+
The Hello class overrides the service(ServletRequest, ServletResponse) method defined by the Servlet interface to provide the code for the service request handler. The service() method is passed a ServletRequest object that contains the request from the client and a ServletResponse object used to create the response returned to the client. The service() method declares that it throws the exceptions ServletException and IOException if a problem prevents it from responding to the request.
+
The setContentType(String) method in the response object is called to set the MIME content type of the returned data to "text/html". The getWriter() method in the response returns a PrintWriter object that is used to write the data that is sent to the client. The println(String) method is called to write the "Hello, world!" string to the response and then the close() method is called to close the print writer, which causes the data that has been written to the stream to be returned to the client.
JavaServer Pages (JSP) are server-side Java EE components that generate responses, typically HTML pages, to HTTP requests from clients. JSPs embed Java code in an HTML page by using the special delimiters <% and %>. A JSP is compiled to a Java servlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.
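A minimal, hypothetical JSP using those delimiters might look like this (file name and variable are illustrative):

```jsp
<%-- hello.jsp: compiled to a servlet the first time it is requested --%>
<html>
  <body>
    <% final String visitor = "world"; %>
    <p>Hello, <%= visitor %>!</p>
  </body>
</html>
```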
Swing is a graphical user interface library for the Java SE platform. It is possible to specify a different look and feel through the pluggable look and feel system of Swing. Clones of Windows, GTK+ and Motif are supplied by Sun. Apple also provides an Aqua look and feel for Mac OS X. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more native GUI widget drawing routines of the underlying platforms.
+
This example Swing application creates a single window with "Hello, world!" inside:
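The example application itself is not reproduced in this text; a reconstruction matching the description that follows might be (an illustrative sketch; it opens a window, so it requires a graphical environment):

```java
// Hello.java — reconstruction of the described Swing program (assumption,
// not the article's verbatim listing).
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.WindowConstants;

public class Hello extends JFrame {
    public Hello() {
        super("hello");                                          // window title
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); // dispose on close
        add(new JLabel("Hello, world!"));                        // label in the frame
        pack();                                                  // size to fit contents
    }

    public static void main(final String[] args) {
        new Hello().setVisible(true);
    }
}
```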
The first import includes all the public classes and interfaces from the javax.swing package.
+
The Hello class extends the JFrame class; the JFrame class implements a window with a title bar and a close control.
+
The Hello() constructor initializes the frame by first calling the superclass constructor, passing the parameter "hello", which is used as the window's title. It then calls the setDefaultCloseOperation(int) method inherited from JFrame to set the default operation when the close control on the title bar is selected to WindowConstants.EXIT_ON_CLOSE – this causes the JFrame to be disposed of when the frame is closed (as opposed to merely hidden), which allows the Java virtual machine to exit and the program to terminate. Next, a JLabel is created for the string "Hello, world!" and the add(Component) method inherited from the Container superclass is called to add the label to the frame. The pack() method inherited from the Window superclass is called to size the window and lay out its contents.
+
The main() method is called by the Java virtual machine when the program starts. It instantiates a new Hello frame and causes it to be displayed by calling the setVisible(boolean) method inherited from the Component superclass with the boolean parameter true. Once the frame is displayed, exiting the main method does not cause the program to terminate because the AWT event dispatching thread remains active until all of the Swing top-level windows have been disposed.
In 2004, generics were added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects. Either the container operates on all subtypes of a class or interface, usually Object, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, certain runtime exceptions are converted to compile-time errors, a characteristic known as type safety.
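A brief sketch of the difference (class and method names are hypothetical): with a raw container the element type is unchecked and a cast is required on retrieval, while a generic container is checked at compile time:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    // With generics the element type is part of the declaration, so the
    // compiler rejects wrong insertions and no cast is needed on retrieval.
    public static String first(List<String> items) {
        return items.get(0);
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        strings.add("forty-two");
        // strings.add(42);        // would not compile: checked at compile time
        String s = first(strings); // no cast required
        System.out.println(s);

        // Before generics, a raw List accepted any Object and needed casts:
        List raw = new ArrayList();
        raw.add("forty-two");
        String t = (String) raw.get(0); // a wrong cast would fail only at run time
        System.out.println(t.equals(s));
    }
}
```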
Criticisms directed at Java include the implementation of generics,[47] speed,[48] the handling of unsigned numbers,[49] the implementation of floating-point arithmetic,[50] and a history of security vulnerabilities in the primary Java VM implementation HotSpot.[51]
+
Use on unofficial software platforms
+
The Java programming language requires the presence of a software platform in order for compiled programs to be executed. A well-known unofficial Java-like software platform is the Android software platform, which allows the use of Java 6 and some Java 7 features. It uses a different standard library (an Apache Harmony reimplementation), a different bytecode language, and a different virtual machine, and is designed for low-memory devices such as smartphones and tablet computers.
+
[Image: The Android operating system makes extensive use of Java-related technology]
Google and Android, Inc. have chosen to use Java as a key pillar in the creation of the Android operating system, an open-source mobile operating system. Although the Android operating system, built on the Linux kernel, was written largely in C, the Android SDK uses the Java language as the basis for Android applications. However, Android does not use the Java virtual machine, instead using Java bytecode as an intermediate step and ultimately targeting Android's own Dalvik virtual machine or, more recently, the Android Runtime, which compiles applications to native machine code upon installation.
+
Android also does not provide the full Java SE standard library, although the Android class library does include an independent implementation of a large subset of it. This led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices.[52] District Judge William Haskell Alsup ruled on May 31, 2012, that APIs cannot be copyrighted,[53] but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014.[54][55][56]
The Java Class Library is the standard library, developed to support application development in Java. It is controlled by Sun Microsystems in cooperation with others through the Java Community Process program. Companies or individuals participating in this process can influence the design and development of the APIs. This process has been a subject of controversy. The class library contains features such as:
The (heavyweight, or native) Abstract Window Toolkit (AWT), which provides GUI components, the means for laying out those components and the means for handling events from those components
+
The (lightweight) Swing libraries, which are built on AWT but provide (non-native) implementations of the AWT widgetry
A platform-dependent implementation of the Java virtual machine that is the means by which the bytecodes of the Java libraries and third-party applications are executed
+
Plugins, which enable applets to be run in web browsers
Javadoc is a comprehensive documentation system, created by Sun Microsystems and used by many Java developers. It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are /** and */, whereas the normal multi-line comments in Java are set off with the delimiters /* and */.[60]
Sun has defined and supports four editions of Java targeting different application environments and segmented many of its APIs so that they belong to one of the platforms. The platforms are Java Card, Java ME (Micro Edition), Java SE (Standard Edition), and Java EE (Enterprise Edition).
The classes in the Java APIs are organized into separate groups called packages. Each package contains a set of related interfaces, classes and exceptions. Refer to the separate platforms for a description of the packages available.
+
Sun also provided an edition called PersonalJava that has been superseded by later, standards-based Java ME configuration-profile pairings.
^Niklaus Wirth stated on a number of public occasions, e.g. in a lecture at the Polytechnic Museum, Moscow in September, 2005 (several independent first-hand accounts in Russian exist, e.g. one with an audio recording: Filippova, Elena (September 22, 2005). "Niklaus Wirth's lecture at the Polytechnic Museum in Moscow".), that the Sun Java design team licensed the Oberon compiler sources a number of years prior to the release of Java and examined it: a (relative) compactness, type safety, garbage collection, no multiple inheritance for classes – all these key overall design features are shared by Java and Oberon.
+
^Patrick Naughton cites Objective-C as a strong influence on the design of the Java programming language, stating that notable direct derivatives include Java interfaces (derived from Objective-C's protocol) and primitive wrapper classes. [3]
+
^TechMetrix Research (1999). "History of Java" (PDF). Java Application Servers Report. "The project went ahead under the name green and the language was based on an old model of UCSD Pascal, which makes it possible to generate interpretive code."
^"In the summer of 1996, Sun was designing the precursor to what is now the event model of the AWT and the JavaBeans™ component architecture. Borland contributed greatly to this process. We looked very carefully at Delphi Object Pascal and built a working prototype of bound method references in order to understand their interaction with the Java programming language and its APIs." White Paper About Microsoft's "Delegates"
^McMillan, Robert (2013-08-01). "Is Java Losing Its Mojo?". wired.com. Java is on the wane, at least according to one outfit that keeps on eye on the ever-changing world of computer programming languages. For more than a decade, it has dominated the Tiobe Programming Community Index – a snapshot of software developer enthusiasm that looks at things like internet search results to measure how much buzz different languages have. But lately, Java has been slipping.
+
^RedMonk Index on redmonk.com (Stephen O'Grady, January 2015)
^"Oracle and Java". oracle.com. Oracle Corporation. Retrieved 2010-08-23. Oracle has been a leading and substantive supporter of Java since its emergence in 1995 and takes on the new role as steward of Java technology with a relentless commitment to fostering a community of participation and transparency.
The latest version is Java 8, which is the only version currently supported for free by Oracle, although earlier versions are supported both by Oracle and other companies on a commercial basis.
James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991.[20] Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time.[21] The language was initially called Oak after an oak tree that stood outside Gosling's office. Later the project went by the name Green and was finally renamed Java, from Java coffee.[22] Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar.[23]
Sun Microsystems released the first public implementation as Java 1.0 in 1995.[1] It promised "Write Once, Run Anywhere" (WORA), providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification.[24] With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms. J2EE included technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively.
In 1997, Sun Microsystems approached the ISO/IEC JTC 1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process.[25][26][27] Java remains a de facto standard, controlled through the Java Community Process.[28] At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System.
On November 13, 2006, Sun released much of its Java virtual machine (JVM) as free and open-source software (FOSS), under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of its JVM's core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.[29]
Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an "evangelist".[30] Following Oracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the "steward of Java technology with a relentless commitment to fostering a community of participation and transparency".[31] This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK (see Google section below). Java software runs on everything from laptops to data centers, game consoles to scientific supercomputers.[32] On April 2, 2010, James Gosling resigned from Oracle.[33]
Principles
There were five primary goals in the creation of the Java language:[15]
It must be "simple, object-oriented, and familiar".
It must be "robust and secure".
It must be "architecture-neutral and portable".
It must execute with "high performance".
It must be "interpreted, threaded, and dynamic".
One design goal of Java is portability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate runtime support. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to architecture-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be executed by a virtual machine (VM) written specifically for the host hardware. End users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a web browser for Java applets.
Standard libraries provide a generic way to access host-specific features such as graphics, threading, and networking.
The use of universal bytecode makes porting simple. However, the overhead of interpreting bytecode into machine instructions makes interpreted programs almost always run more slowly than native executables. Just-in-time (JIT) compilers, which compile bytecode to machine code during runtime, were introduced from an early stage to address this. Java itself is platform-independent and is adapted to the particular platform it is to run on by a Java virtual machine, which translates the Java bytecode into the platform's machine language.[34]
Oracle Corporation is the current owner of the official implementation of the Java SE platform, following its acquisition of Sun Microsystems on January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available for Microsoft Windows (it still works on XP, although only later versions are currently officially supported), Mac OS X, Linux, and Solaris. Because Java lacks any formal standardization recognized by Ecma International, ISO/IEC, ANSI, or other third-party standards organizations, the Oracle implementation is the de facto standard.
The Oracle implementation is packaged into two different distributions: The Java Runtime Environment (JRE) which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and the Java Development Kit (JDK), which is intended for software developers and includes development tools such as the Java compiler, Javadoc, Jar, and a debugger.
OpenJDK is another notable Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation.
The goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insisted that all implementations be "compatible". This resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did not support RMI or JNI and had added platform-specific features of its own. Sun sued in 1997, and, in 2001, won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun.[35] As a result, Microsoft no longer ships Java with Windows.
Platform-independent Java is essential to Java EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications.
Programs written in Java have a reputation for being slower and requiring more memory than those written in C++.[36][37] However, Java programs' execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1,[38] the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such as HotSpot becoming the default for Sun's JVM in 2000.
Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java in hardware instead of a software Java virtual machine, and ARM-based processors can have hardware support for executing Java bytecode through their Jazelle option (though support has mostly been dropped in current ARM implementations).
Automatic memory management
Java uses an automatic garbage collector to manage memory in the object lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector. Something similar to a memory leak may still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. If methods for a nonexistent object are called, a "null pointer exception" is thrown.[39][40]
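The null-reference behaviour described above can be illustrated with a minimal sketch (class and method names are ours): invoking a method through a null reference throws a NullPointerException, which a program can catch like any other exception.

```java
public class NullDemo {
    // Returns true if calling a method on a null reference throws NullPointerException.
    static boolean triggersNpe() {
        String s = null;          // a reference that points to no object
        try {
            s.length();           // method call on a null reference
            return false;         // never reached
        } catch (NullPointerException e) {
            return true;          // the runtime threw the expected exception
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersNpe()); // prints: true
    }
}
```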
One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on the stack, or explicitly allocated and deallocated from the heap. In the latter case the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable and/or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Note that garbage collection does not prevent "logical" memory leaks, i.e., those where the memory is still referenced but never used.
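The "logical" leak mentioned above can be sketched as follows (names are illustrative): every buffer stays reachable through a long-lived list, so the garbage collector can never reclaim it, even though the program never uses it again.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Long-lived container that the program keeps referencing.
    static final List<byte[]> cache = new ArrayList<byte[]>();

    static void handleRequest() {
        byte[] buffer = new byte[1024]; // needed only while handling one request
        cache.add(buffer);              // ...but a reference is retained forever
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        // All 1000 buffers are still reachable: a logical leak the collector cannot fix.
        System.out.println(cache.size());
    }
}
```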
Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java.
Java does not support C/C++ style pointer arithmetic, where object addresses and unsigned integers (usually long integers) can be used interchangeably. This allows the garbage collector to relocate referenced objects and ensures type safety and security.
As in C++ and some other object-oriented languages, variables of Java's primitive data types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for non-primitive data types (but see escape analysis). This was a conscious decision by Java's designers for performance reasons.
The syntax of Java is largely influenced by C++. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language.[15] All code is written inside classes, and every data item is an object, with the exception of the primitive data types, i.e. integers, floating-point numbers, boolean values, and characters, which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as printf() method).
Java uses comments similar to those of C++. There are three different styles of comments: a single line style marked with two slashes (//), a multiple line style opened with /* and closed with */, and the Javadoc commenting style opened with /** and closed with */. The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program.
Example:

// This is an example of a single line comment using two slashes

/* This is an example of a multiple line comment using the slash and asterisk.
   This type of comment can be used to hold a lot of information or deactivate
   code, but it is very important to remember to close the comment. */

package fibsandlies;

import java.util.HashMap;
import java.util.Map;

/**
 * This is an example of a Javadoc comment; Javadoc can compile documentation
 * from this text. Javadoc comments must immediately precede the class, method, or field being documented.
 */
public class FibCalculator extends Fibonacci implements Calculator {
    private static Map<Integer, Integer> memoized = new HashMap<Integer, Integer>();

    /*
     * The main method written as follows is used by the JVM as a starting point for the program.
     */
    public static void main(String[] args) {
        memoized.put(1, 1);
        memoized.put(2, 1);
        System.out.println(fibonacci(12)); // Get the 12th Fibonacci number and print to console
    }

    /**
     * An example of a method written in Java, wrapped in a class.
     * Given a non-negative number FIBINDEX, returns
     * the Nth Fibonacci number, where N equals FIBINDEX.
     * @param fibIndex The index of the Fibonacci number
     * @return The Fibonacci number
     */
    public static int fibonacci(int fibIndex) {
        if (memoized.containsKey(fibIndex)) {
            return memoized.get(fibIndex);
        } else {
            int answer = fibonacci(fibIndex - 1) + fibonacci(fibIndex - 2);
            memoized.put(fibIndex, answer);
            return answer;
        }
    }
}

class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // Prints the string to the console.
    }
}
Source files must be named after the public class they contain, appending the suffix .java, for example, HelloWorldApp.java. A source file must first be compiled into bytecode, using a Java compiler, producing a file named HelloWorldApp.class; only then can it be executed, or "launched". A Java source file may only contain one public class, but it can contain multiple classes with other than public access and any number of public inner classes. When the source file contains multiple classes, one class must be made public and the source file named after that public class.
A class that is not declared public may be stored in any .java file. The compiler will generate a class file for each class defined in the source file. The name of the class file is the name of the class, with .class appended. For class file generation, anonymous classes are treated as if their name were the concatenation of the name of their enclosing class, a $, and an integer.
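The naming rule for anonymous classes can be observed directly (the class Outer and its members are ours): the generated class is named after the enclosing class, a $, and an integer.

```java
public class Outer {
    // An anonymous class: the compiler emits a class file named Outer$1.class.
    static Runnable task = new Runnable() {
        public void run() { }
    };

    static String anonymousName() {
        return task.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(anonymousName()); // the enclosing class name, "$", and an integer
    }
}
```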
The keyword public denotes that a method can be called from code in other classes, or that a class may be used by classes outside the class hierarchy. The class hierarchy is related to the name of the directory in which the .java file is located. This is called an access level modifier. Other access level modifiers include the keywords private and protected.
The keyword static in front of a method indicates a static method, which is associated only with the class and not with any specific instance of that class. Only static methods can be invoked without a reference to an object. Static methods cannot access any class members that are not also static. Methods that are not designated static are instance methods, and require a specific instance of a class to operate.
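A small sketch of that distinction (class and method names are ours): square can be invoked through the class itself, while increment needs an instance.

```java
public class Counter {
    private int count = 0;            // per-instance state

    void increment() {                // instance method: operates on one specific object
        count++;
    }

    int value() {
        return count;
    }

    static int square(int n) {        // static method: associated only with the class
        return n * n;
    }

    public static void main(String[] args) {
        int s = Counter.square(4);    // invoked without any instance
        Counter c = new Counter();    // instance methods require an object reference
        c.increment();
        System.out.println(s + " " + c.value()); // prints: 16 1
    }
}
```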
The keyword void indicates that the main method does not return any value to the caller. If a Java program is to exit with an error code, it must call System.exit() explicitly.
The method name "main" is not a keyword in the Java language. It is simply the name of the method the Java launcher calls to pass control to the program. Java classes that run in managed environments such as applets and Enterprise JavaBeans do not use or need a main() method. A Java program may contain multiple classes that have main methods, which means that the VM needs to be explicitly told which class to launch from.
The main method must accept an array of String objects. By convention, it is referenced as args although any other legal identifier name can be used. Since Java 5, the main method can also use variable arguments, in the form of public static void main(String... args), allowing the main method to be invoked with an arbitrary number of String arguments. The effect of this alternate declaration is semantically identical (the args parameter is still an array of String objects), but it allows an alternative syntax for creating and passing the array.
The Java launcher launches Java by loading a given class (specified on the command line or as an attribute in a JAR) and starting its public static void main(String[]) method. Stand-alone programs must declare this method explicitly. The String[] args parameter is an array of String objects containing any arguments passed to the class. The parameters to main are often passed by means of a command line.
Printing is part of a Java standard library: The System class defines a public static field called out. The out object is an instance of the PrintStream class and provides many methods for printing data to standard out, including println(String) which also appends a new line to the passed string.
The string "Hello World!" is automatically converted to a String object by the compiler.
Comprehensive example
// OddEven.java
import javax.swing.JOptionPane;

public class OddEven {

    private int userInput; // a whole number ("int" means integer)

    /**
     * This is the constructor method. It gets called when an object of the OddEven type
     * is being created.
     */
    public OddEven() {
        /*
         * In most Java programs constructors can initialize objects with default values, or create
         * other objects that this object might use to perform its functions. In some Java programs, the
         * constructor may simply be an empty function if nothing needs to be initialized prior to the
         * functioning of the object. In this program's case, an empty constructor would suffice.
         * A constructor must exist; however, if the user doesn't put one in then the compiler
         * will create an empty one.
         */
    }

    /**
     * This is the main method. It gets called when this class is run through a Java interpreter.
     * @param args command line arguments (unused)
     */
    public static void main(final String[] args) {
        /*
         * This line of code creates a new instance of this class called "number" (also known as an
         * Object) and initializes it by calling the constructor. The next line of code calls
         * the "showDialog()" method, which brings up a prompt to ask you for a number.
         */
        OddEven number = new OddEven();
        number.showDialog();
    }

    public void showDialog() {
        /*
         * "try" makes sure nothing goes wrong. If something does,
         * the interpreter skips to "catch" to see what it should do.
         */
        try {
            /*
             * The code below brings up a JOptionPane, which is a dialog box.
             * The String returned by the "showInputDialog()" method is converted into
             * an integer, making the program treat it as a number instead of a word.
             * After that, this method calls a second method, calculate(), that will
             * display either "Even" or "Odd."
             */
            userInput = Integer.parseInt(JOptionPane.showInputDialog("Please enter a number."));
            calculate();
        } catch (final NumberFormatException e) {
            /*
             * Getting in the catch block means that there was a problem with the format of
             * the number. Probably some letters were typed in instead of a number.
             */
            System.err.println("ERROR: Invalid input. Please type in a numerical value.");
        }
    }

    /**
     * When this gets called, it opens a small message dialog that says
     * whether the number the user entered is even or odd.
     */
    private void calculate() {
        if ((userInput % 2) == 0) {
            JOptionPane.showMessageDialog(null, "Even");
        } else {
            JOptionPane.showMessageDialog(null, "Odd");
        }
    }
}
The OddEven class declares a single private field of type int named userInput. Every instance of the OddEven class has its own copy of the userInput field. The private declaration means that no other class can access (read or write) the userInput field.
OddEven() is a public constructor. Constructors have the same name as the enclosing class they are declared in and, unlike a method, have no return type. A constructor is used to initialize an object that is a newly created instance of the class.
The calculate() method is declared without the static keyword. This means that the method is invoked using a specific instance of the OddEven class. (The reference used to invoke the method is passed as an undeclared parameter of type OddEven named this.) The method tests the expression userInput % 2 == 0 using the if keyword to see if the remainder of dividing the userInput field belonging to the instance of the class by two is zero. If this expression is true, then it prints Even; if this expression is false it prints Odd. (The calculate method can be equivalently accessed as this.calculate and the userInput field can be equivalently accessed as this.userInput, which both explicitly use the undeclared this parameter.)
OddEven number = new OddEven(); declares a local object reference variable in the main method named number. This variable can hold a reference to an object of type OddEven. The declaration initializes number by first creating an instance of the OddEven class, using the new keyword and the OddEven() constructor, and then assigning this instance to the variable.
The statement number.showDialog(); calls the showDialog method. The instance of the OddEven object referenced by the number local variable is used to invoke the method and is passed as the undeclared this parameter to the showDialog method.
userInput = Integer.parseInt(JOptionPane.showInputDialog("Please enter a number.")); is a statement that converts the String returned by the dialog to the primitive data type int by using a utility function in the primitive wrapper class Integer.
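The applet walk-through that follows refers to a source listing that did not survive extraction. A minimal listing consistent with that description (class Hello, drawing "Hello, world!" at offset (65, 95); the original listing also carried the @Override annotation discussed below) would be:

```java
// Hello.java (reconstructed from the description that follows)
import java.awt.Graphics;
import javax.swing.JApplet;

public class Hello extends JApplet {
    public void paintComponent(final Graphics g) {
        g.drawString("Hello, world!", 65, 95);
    }
}
```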
The import statements direct the Java compiler to include the javax.swing.JApplet and java.awt.Graphics classes in the compilation. The import statement allows these classes to be referenced in the source code using the simple class name (i.e. JApplet) instead of the fully qualified class name (FQCN, i.e. javax.swing.JApplet).
The Hello class extends (subclasses) the JApplet (Java Applet) class; the JApplet class provides the framework for the host application to display and control the lifecycle of the applet. The JApplet class is a JComponent (Java Graphical Component) which provides the applet with the capability to display a graphical user interface (GUI) and respond to user events.
The Hello class overrides the paintComponent(Graphics) method (additionally indicated with the @Override annotation, supported as of JDK 1.5) inherited from the Container superclass to provide the code to display the applet. The paintComponent() method is passed a Graphics object that contains the graphic context used to display the applet. The paintComponent() method calls the graphic context's drawString(String, int, int) method to display the "Hello, world!" string at a pixel offset of (65, 95) from the upper-left corner of the applet's display.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<!-- Hello.html -->
<html>
  <head>
    <title>Hello World Applet</title>
  </head>
  <body>
    <applet code="Hello.class" width="200" height="200">
    </applet>
  </body>
</html>
An applet is placed in an HTML document using the <applet> HTML element. The applet tag has three attributes set: code="Hello.class" specifies the name of the JApplet class, and width="200" height="200" set the pixel width and height of the applet. Applets may also be embedded in HTML using either the object or embed element,[45] although support for these elements by web browsers is inconsistent.[46] However, the applet tag is deprecated, so the object tag is preferred where supported.
The host application, typically a Web browser, instantiates the Hello applet and creates an AppletContext for the applet. Once the applet has initialized itself, it is added to the AWT display hierarchy. The paintComponent() method is called by the AWT event dispatching thread whenever the display needs the applet to draw itself.
Java Servlet technology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets are server-side Java EE components that generate responses (typically HTML pages) to requests (typically HTTP requests) from clients. A servlet can almost be thought of as an applet that runs on the server side—without a face.
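The servlet walk-through below likewise refers to a listing lost in extraction; a version consistent with that description (it requires the javax.servlet API on the classpath, which is not part of Java SE) is:

```java
// Hello.java (reconstructed; requires the javax.servlet API)
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.GenericServlet;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class Hello extends GenericServlet {
    public void service(final ServletRequest request, final ServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");        // MIME type of the reply
        final PrintWriter pw = response.getWriter(); // stream used to write the response
        pw.println("Hello, world!");
        pw.close();                                  // flushes the written data to the client
    }
}
```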
The import statements direct the Java compiler to include all the public classes and interfaces from the java.io and javax.servlet packages in the compilation. Packages make Java well suited for large scale applications.
The Hello class extends the GenericServlet class; the GenericServlet class provides the interface for the server to forward requests to the servlet and control the servlet's lifecycle.
The Hello class overrides the service(ServletRequest, ServletResponse) method defined by the Servlet interface to provide the code for the service request handler. The service() method is passed a ServletRequest object that contains the request from the client and a ServletResponse object used to create the response returned to the client. The service() method declares that it throws the exceptions ServletException and IOException if a problem prevents it from responding to the request.
The setContentType(String) method in the response object is called to set the MIME content type of the returned data to "text/html". The getWriter() method in the response returns a PrintWriter object that is used to write the data that is sent to the client. The println(String) method is called to write the "Hello, world!" string to the response and then the close() method is called to close the print writer, which causes the data that has been written to the stream to be returned to the client.
JavaServer Pages (JSP) are server-side Java EE components that generate responses, typically HTML pages, to HTTP requests from clients. JSPs embed Java code in an HTML page by using the special delimiters <% and %>. A JSP is compiled to a Java servlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.
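As a sketch (the file name and markup are illustrative), a JSP mixing static HTML with Java code between the <% and %> delimiters might look like:

```
<%-- hello.jsp --%>
<html>
  <body>
    <% out.println("Hello, world!"); %>
  </body>
</html>
```

Here out is the implicit JspWriter object that JSP makes available to embedded code.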
Swing is a graphical user interface library for the Java SE platform. It is possible to specify a different look and feel through the pluggable look and feel system of Swing. Clones of Windows, GTK+ and Motif are supplied by Sun. Apple also provides an Aqua look and feel for Mac OS X. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more native GUI widget drawing routines of the underlying platforms.
This example Swing application creates a single window with "Hello, world!" inside:
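The Swing listing this refers to was lost in extraction; a minimal version consistent with the explanation that follows is:

```java
// Hello.java (reconstructed from the description that follows)
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.WindowConstants;

public class Hello extends JFrame {
    public Hello() {
        super("hello");                                          // window title
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); // exit when the frame closes
        add(new JLabel("Hello, world!"));
        pack();                                                  // size the window to its contents
    }

    public static void main(final String[] args) {
        new Hello().setVisible(true);
    }
}
```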
The first import includes all the public classes and interfaces from the javax.swing package.
The Hello class extends the JFrame class; the JFrame class implements a window with a title bar and a close control.
The Hello() constructor initializes the frame by first calling the superclass constructor, passing the parameter "hello", which is used as the window's title. It then calls the setDefaultCloseOperation(int) method inherited from JFrame to set the default operation when the close control on the title bar is selected to WindowConstants.EXIT_ON_CLOSE – this causes the JFrame to be disposed of when the frame is closed (as opposed to merely hidden), which allows the Java virtual machine to exit and the program to terminate. Next, a JLabel is created for the string "Hello, world!", and the add(Component) method inherited from the Container superclass is called to add the label to the frame. The pack() method inherited from the Window superclass is called to size the window and lay out its contents.
The main() method is called by the Java virtual machine when the program starts. It instantiates a new Hello frame and causes it to be displayed by calling the setVisible(boolean) method inherited from the Component superclass with the boolean parameter true. Once the frame is displayed, exiting the main method does not cause the program to terminate because the AWT event dispatching thread remains active until all of the Swing top-level windows have been disposed.
In 2004, generics were added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects. Either the container operates on all subtypes of a class or interface, usually Object, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, certain runtime exceptions are converted to compile-time errors, a characteristic known as type safety.
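A minimal sketch of the compile-time checking generics added (class and method names are ours): the parameterized list accepts only Strings, and retrieval needs no cast.

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    static String firstName() {
        List<String> names = new ArrayList<String>(); // container restricted to Strings
        names.add("Duke");
        // names.add(42);       // would be rejected at compile time, not at runtime
        return names.get(0);    // no cast needed, unlike a raw Object container
    }

    public static void main(String[] args) {
        System.out.println(firstName()); // prints: Duke
    }
}
```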
Criticisms directed at Java include the implementation of generics,[47] speed,[48] the handling of unsigned numbers,[49] the implementation of floating-point arithmetic,[50] and a history of security vulnerabilities in the primary Java VM implementation HotSpot.[51]
Use on unofficial software platforms
The Java programming language requires the presence of a software platform in order for compiled programs to be executed. A well-known unofficial Java-like software platform is the Android software platform, which allows the use of Java 6 and some Java 7 features. Android uses a different standard library (an Apache Harmony reimplementation), a different bytecode language, and a different virtual machine, and is designed for low-memory devices such as smartphones and tablet computers.
[Image caption: The Android operating system makes extensive use of Java-related technology.]

Google and Android, Inc. have chosen to use Java as a key pillar in the creation of the Android operating system, an open-source mobile operating system. Although the Android operating system, built on the Linux kernel, was written largely in C, the Android SDK uses the Java language as the basis for Android applications. However, Android does not use the Java virtual machine, instead using Java bytecode as an intermediate step and ultimately targeting Android's own Dalvik virtual machine or, more recently, the Android Runtime, which compiles applications to native machine code upon installation.
Android also does not provide the full Java SE standard library, although the Android class library does include an independent implementation of a large subset of it. This led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices.[52] District Judge William Haskell Alsup ruled on May 31, 2012, that APIs cannot be copyrighted,[53] but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014.[54][55][56]
The Java Class Library is the standard library, developed to support application development in Java. It is controlled by Sun Microsystems in cooperation with others through the Java Community Process program. Companies or individuals participating in this process can influence the design and development of the APIs. This process has been a subject of controversy. The class library contains features such as:
The (heavyweight, or native) Abstract Window Toolkit (AWT), which provides GUI components, the means for laying out those components and the means for handling events from those components
+
The (lightweight) Swing libraries, which are built on AWT but provide (non-native) implementations of the AWT widgetry
A platform-dependent implementation of the Java virtual machine that is the means by which the bytecodes of the Java libraries and third-party applications are executed
+
Plugins, which enable applets to be run in web browsers
Javadoc is a comprehensive documentation system, created by Sun Microsystems, used by many Java developers. It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are /** and */, whereas the normal multi-line comments in Java are set off with the delimiters /* and */.[60]
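For illustration, a minimal sketch showing both comment styles side by side (the class and method names here are our own, not from any particular codebase):

```java
/**
 * Utility for temperature conversion. This block is a Javadoc
 * comment: it begins with the extra asterisk and may carry tags
 * such as @param and @return, which the javadoc tool turns into
 * HTML documentation.
 */
public class TemperatureUtil {

    /**
     * Converts a temperature from Celsius to Fahrenheit.
     *
     * @param celsius the temperature in degrees Celsius
     * @return the equivalent temperature in degrees Fahrenheit
     */
    public static double toFahrenheit(double celsius) {
        /* This is an ordinary multi-line comment: a single
           asterisk, ignored by the javadoc tool. */
        return celsius * 9.0 / 5.0 + 32.0;
    }

    public static void main(String[] args) {
        System.out.println(toFahrenheit(100.0)); // prints 212.0
    }
}
```

Running `javadoc TemperatureUtil.java` would generate HTML pages from the `/** ... */` blocks; the `/* ... */` comment is skipped.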
Sun has defined and supports four editions of Java targeting different application environments and segmented many of its APIs so that they belong to one of the platforms. The platforms are:
The classes in the Java APIs are organized into separate groups called packages. Each package contains a set of related interfaces, classes and exceptions. Refer to the separate platforms for a description of the packages available.
+
Sun also provided an edition called PersonalJava that has been superseded by later, standards-based Java ME configuration-profile pairings.
^Niklaus Wirth stated on a number of public occasions, e.g. in a lecture at the Polytechnic Museum, Moscow in September 2005 (several independent first-hand accounts in Russian exist, e.g. one with an audio recording: Filippova, Elena (September 22, 2005). "Niklaus Wirth's lecture at the Polytechnic Museum in Moscow".), that the Sun Java design team licensed the Oberon compiler sources a number of years prior to the release of Java and examined them: (relative) compactness, type safety, garbage collection, no multiple inheritance for classes – all these key overall design features are shared by Java and Oberon.
+
^Patrick Naughton cites Objective-C as a strong influence on the design of the Java programming language, stating that notable direct derivatives include Java interfaces (derived from Objective-C's protocol) and primitive wrapper classes. [3]
+
^TechMetrix Research (1999). "History of Java"(PDF). Java Application Servers Report. The project went ahead under the name "green" and the language was based on an old model of UCSD Pascal, which makes it possible to generate interpretive code
^In the summer of 1996, Sun was designing the precursor to what is now the event model of the AWT and the JavaBeans TM component architecture. Borland contributed greatly to this process. We looked very carefully at Delphi Object Pascal and built a working prototype of bound method references in order to understand their interaction with the Java programming language and its APIs. — White Paper About Microsoft's "Delegates"
^McMillan, Robert (2013-08-01). "Is Java Losing Its Mojo?". wired.com. Java is on the wane, at least according to one outfit that keeps an eye on the ever-changing world of computer programming languages. For more than a decade, it has dominated the Tiobe Programming Community Index – a snapshot of software developer enthusiasm that looks at things like internet search results to measure how much buzz different languages have. But lately, Java has been slipping.
+
^RedMonk Index on redmonk.com (Stephen O'Grady, January 2015)
^"Oracle and Java". oracle.com. Oracle Corporation. Retrieved 2010-08-23. Oracle has been a leading and substantive supporter of Java since its emergence in 1995 and takes on the new role as steward of Java technology with a relentless commitment to fostering a community of participation and transparency.
Knowledge can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); it can be more or less formal or systematic.[1] In philosophy, the study of knowledge is called epistemology; the classical definition, famously discussed by Plato, is "justified true belief", though "well-justified true belief" has been proposed as more complete because it accounts for the Gettier problems. However, several definitions of knowledge and theories to explain it exist.
+
Knowledge acquisition involves complex cognitive processes: perception, communication, and reasoning; while knowledge is also said to be related to the capacity of acknowledgment in human beings.[2]
The eventual demarcation of philosophy from science was made possible by the notion that philosophy's core was "theory of knowledge," a theory distinct from the sciences because it was their foundation... Without this idea of a "theory of knowledge," it is hard to imagine what "philosophy" could have been in the age of modern science.
+
— Richard Rorty, Philosophy and the Mirror of Nature
+
+
The definition of knowledge is a matter of ongoing debate among philosophers in the field of epistemology. The classical definition, described but not ultimately endorsed by Plato,[3] specifies that a statement must meet three criteria in order to be considered knowledge: it must be justified, true, and believed. Some claim that these conditions are not sufficient, as Gettier case examples allegedly demonstrate. A number of alternatives have been proposed, including Robert Nozick's arguments for a requirement that knowledge 'tracks the truth' and Simon Blackburn's additional requirement that we do not want to say that those who meet any of these conditions 'through a defect, flaw, or failure' have knowledge. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth.[4]
+
In contrast to this approach, Ludwig Wittgenstein observed, following Moore's paradox, that one can say "He believes it, but it isn't so," but not "He knows it, but it isn't so."[5] He goes on to argue that these do not correspond to distinct mental states, but rather to distinct ways of talking about conviction. What is different here is not the mental state of the speaker, but the activity in which they are engaged. For example, on this account, to know that the kettle is boiling is not to be in a particular state of mind, but to perform a particular task with the statement that the kettle is boiling. Wittgenstein sought to bypass the difficulty of definition by looking to the way "knowledge" is used in natural languages. He saw knowledge as a case of a family resemblance. Following this idea, "knowledge" has been reconstructed as a cluster concept that points out relevant features but that is not adequately captured by any definition.[6]
Symbolic representations can be used to indicate meaning and can be thought of as a dynamic process. Hence the transfer of the symbolic representation can be viewed as an ascription process whereby knowledge can be transferred. Other forms of communication include observation and imitation, verbal exchange, and audio and video recordings. Philosophers of language and semioticians construct and analyze theories of knowledge transfer or communication.
+
While many would agree that one of the most universal and significant tools for the transfer of knowledge is writing and reading (of many kinds), argument over the usefulness of the written word exists nonetheless, with some scholars skeptical of its impact on societies. In his collection of essays Technopoly, Neil Postman demonstrates the argument against the use of writing through an excerpt from Plato's work Phaedrus (Postman, Neil (1992) Technopoly, Vintage, New York, p. 73). In this excerpt, Socrates recounts the story of Thamus, the Egyptian king, and Theuth, the inventor of the written word. In this story, Theuth presents his new invention "writing" to King Thamus, telling Thamus that his new invention "will improve both the wisdom and memory of the Egyptians" (p. 74). King Thamus is skeptical of this new invention and rejects it as a tool of recollection rather than retained knowledge. He argues that the written word will infect the Egyptian people with fake knowledge, as they will be able to attain facts and stories from an external source and will no longer be forced to mentally retain large quantities of knowledge themselves (p. 74).
+
Classical early modern theories of knowledge, especially those advancing the influential empiricism of the philosopher John Locke, were based implicitly or explicitly on a model of the mind which likened ideas to words.[7] This analogy between language and thought laid the foundation for a graphic conception of knowledge in which the mind was treated as a table (a container of content) that had to be stocked with facts reduced to letters, numbers or symbols. This created a situation in which the spatial alignment of words on the page carried great cognitive weight, so much so that educators paid very close attention to the visual structure of information on the page and in notebooks.[8]
+
Media theorists like Andrew Robinson emphasise that the visual depiction of knowledge in the modern world was often seen as being 'truer' than oral knowledge. This plays into a longstanding analytic notion in the Western intellectual tradition in which verbal communication is generally thought to lend itself to the spread of falsehoods more readily than written communication: it is harder to preserve records of what was said or who originally said it – usually neither the source nor the content can be verified. Gossip and rumors are examples prevalent in both media. As to the value of writing, the extent of human knowledge is now so great, and the people interested in a piece of knowledge so separated in time and space, that writing is considered central to capturing and sharing it.
+
Major libraries today can have millions of books of knowledge (in addition to works of fiction). It is only recently that audio and video technology for recording knowledge has become available, and its use still requires replay equipment and electricity. Verbal teaching and handing down of knowledge is limited to those who would have contact with the transmitter or someone who could interpret written work. Writing is still the most available and most universal of all forms of recording and transmitting knowledge. It stands unchallenged as mankind's primary technology of knowledge transfer down through the ages and to all cultures and languages of the world.
Situated knowledge is knowledge specific to a particular situation. It is a term coined by Donna Haraway as an extension of the feminist approaches of "successor science" suggested by Sandra Harding, one which "offers a more adequate, richer, better account of a world, in order to live in it well and in critical, reflexive relation to our own as well as others' practices of domination and the unequal parts of privilege and oppression that makes up all positions."[9] This situation partially transforms science into a narrative, which Arturo Escobar explains as "neither fictions nor supposed facts." This narrative of situation is a historical texture woven of fact and fiction, and, as Escobar explains further, "even the most neutral scientific domains are narratives in this sense." He insists that the purpose is not to dismiss science as a trivial matter of contingency, but "to treat (this narrative) in the most serious way, without succumbing to its mystification as 'the truth' or to the ironic skepticism common to many critiques."[10]
+
Haraway's argument stems from the limitations of human perception, as well as the overemphasis on the sense of vision in science. According to Haraway, vision in science has been "used to signify a leap out of the marked body and into a conquering gaze from nowhere." This is the "gaze that mythically inscribes all the marked bodies, that makes the unmarked category claim the power to see and not be seen, to represent while escaping representation."[9] This causes a limitation of views in the position of science itself as a potential player in the creation of knowledge, resulting in a position of "modest witness". This is what Haraway terms a "god trick", or the aforementioned representation while escaping representation.[11] In order to avoid this, "Haraway perpetuates a tradition of thought which emphasizes the importance of the subject in terms of both ethical and political accountability".[12]
+
Some methods of generating knowledge, such as trial and error, or learning from experience, tend to create highly situational knowledge. One of the main attributes of the scientific method is that the theories it generates are much less situational than knowledge gained by other methods. Situational knowledge is often embedded in language, culture, or traditions. This integration of situational knowledge is an allusion to the community, and its attempts at collecting subjective perspectives into an embodiment "of views from somewhere."[9]
+
Knowledge generated through experience is called knowledge "a posteriori", meaning "from what comes after". The very existence of this term implies a counterpart: knowledge "a priori", meaning "from what comes before". Knowledge prior to any experience rests on certain "assumptions" that one takes for granted. For example, if you are told about a chair, it is clear to you that the chair is in space, that it is three-dimensional. This is not knowledge that one can "forget"; even someone suffering from amnesia experiences the world in 3D.
+
Even though Haraway's arguments are largely based on feminist studies,[9] this idea of different worlds, as well as the skeptical stance of situated knowledge, is present in the main arguments of post-structuralism. Fundamentally, both argue for the contingency of knowledge on history, power, and geography, the rejection of universal rules, laws, or elementary structures, and the idea of power as an inherited trait of objectification.[13]
One discipline of epistemology focuses on partial knowledge. In most cases, it is not possible to understand an information domain exhaustively; our knowledge is always incomplete or partial. Most real problems have to be solved by taking advantage of a partial understanding of the problem context and problem data, unlike the typical math problems one might solve at school, where all data is given and one has a complete understanding of the formulas necessary to solve them.
+
This idea is also present in the concept of bounded rationality which assumes that in real life situations people often have a limited amount of information and make decisions accordingly.
+
Intuition is the ability to acquire partial knowledge without inference or the use of reason.[14] An individual may "know" about a situation and be unable to explain the process that led to their knowledge.
The development of the scientific method has made a significant contribution to how knowledge of the physical world and its phenomena is acquired.[15] To be termed scientific, a method of inquiry must be based on gathering observable and measurable evidence subject to specific principles of reasoning and experimentation.[16] The scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.[17] Science, and the nature of scientific knowledge, have also become the subject of philosophy. As science itself has developed, knowledge has developed a broader usage which has been developing within biology/psychology—discussed elsewhere as meta-epistemology, or genetic epistemology, and to some extent related to "theory of cognitive development". Note that "epistemology" is the study of knowledge and how it is acquired. Science is "the process used every day to logically complete thoughts through inference of facts determined by calculated experiments." Sir Francis Bacon was critical in the historical development of the scientific method; his works established and popularized an inductive methodology for scientific inquiry. His famous aphorism, "knowledge is power", is found in the Meditations Sacrae (1597).[18]
+
Until recent times, at least in the Western tradition, it was simply taken for granted that knowledge was something possessed only by humans — and probably adult humans at that. Sometimes the notion might stretch to (ii) Society-as-such, as in (e.g.) "the knowledge possessed by the Coptic culture" (as opposed to its individual members), but that was not assured either. Nor was it usual to consider unconscious knowledge in any systematic way until this approach was popularized by Freud.[19]
+
Other biological domains where "knowledge" might be said to reside, include: (iii) the immune system, and (iv) in the DNA of the genetic code. See the list of four "epistemological domains": Popper, (1975);[20] and Traill (2008:[21] Table S, page 31)—also references by both to Niels Jerne.
+
Such considerations seem to call for a separate definition of "knowledge" to cover the biological systems. For biologists, knowledge must be usefully available to the system, though that system need not be conscious. Thus the criteria seem to be:
+
+
The system should apparently be dynamic and self-organizing (unlike a mere book on its own).
+
The knowledge must constitute some sort of representation of "the outside world",[22] or ways of dealing with it (directly or indirectly).
+
Some way must exist for the system to access this information quickly enough for it to be useful.
+
+
Scientific knowledge may not involve a claim to certainty; maintaining skepticism means that a scientist will never be absolutely certain when they are correct and when they are not. It is thus an irony of proper scientific method that one must doubt even when correct, in the hope that this practice will lead to greater convergence on the truth in general.[23]
In Gnosticism, divine knowledge or gnosis is hoped to be attained.
+
विद्या दान (Vidya Daan), i.e. knowledge sharing, is a major part of Daan, a tenet of all Dharmic religions.[25] Hindu scriptures present two kinds of knowledge, Paroksh Gyan and Prataksh Gyan. Paroksh Gyan (also spelled Paroksha-Jnana) is secondhand knowledge: knowledge obtained from books, hearsay, etc. Prataksh Gyan (also spelled Prataksha-Jnana) is the knowledge borne of direct experience, i.e., knowledge that one discovers for oneself.[26] Jnana yoga ("path of knowledge") is one of three main types of yoga expounded by Krishna in the Bhagavad Gita. (It is compared and contrasted with Bhakti Yoga and Karma yoga.)
+
In Islam, knowledge (Arabic: علم, ʿilm) is given great significance. "The Knowing" (al-ʿAlīm) is one of the 99 names reflecting distinct attributes of God. The Qur'an asserts that knowledge comes from God (2:239) and various hadith encourage the acquisition of knowledge. Muhammad is reported to have said "Seek knowledge from the cradle to the grave" and "Verily the men of knowledge are the inheritors of the prophets". Islamic scholars, theologians and jurists are often given the title alim, meaning "knowledgeable".
+
In Jewish tradition, knowledge (Hebrew: דעת da'ath) is considered one of the most valuable traits a person can acquire. Observant Jews recite three times a day in the Amidah "Favor us with knowledge, understanding and discretion that come from you. Exalted are you, Existent-One, the gracious giver of knowledge." The Tanakh states, "A wise man gains power, and a man of knowledge maintains power", and "knowledge is chosen above gold".
+
As a measure of religiosity (in sociology of religion)
+
According to the sociologist Mervin Verbit, knowledge may be understood as one of the key components of religiosity. Religious knowledge itself may be broken down into four dimensions:
+
+
content
+
frequency
+
intensity
+
centrality
+
+
The content of one's religious knowledge may vary from person to person, as will the degree to which it may occupy the person's mind (frequency), the intensity of the knowledge, and the centrality of the information (in that religious tradition, or to that individual).[27][28][29]
^Stanley Cavell, "Knowing and Acknowledging", Must We Mean What We Say? (Cambridge University Press, 2002), 238–266.
+
^In Plato's Theaetetus, Socrates and Theaetetus discuss three definitions of knowledge: knowledge as nothing but perception, knowledge as true judgment, and, finally, knowledge as a true judgment with an account. Each of these definitions is shown to be unsatisfactory.
^Gottschalk-Mazouz, N. (2008): "Internet and the flow of knowledge," in: Hrachovec, H.; Pichler, A. (Hg.): Philosophy of the Information Society. Proceedings of the 30. International Ludwig Wittgenstein Symposium Kirchberg am Wechsel, Austria 2007. Volume 2, Frankfurt, Paris, Lancaster, New Brunswik: Ontos, S. 215–232. http://sammelpunkt.philo.at:8080/2022/1/Gottschalk-Mazouz.pdf
^ abcd "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective". Haraway, Donna. Feminist Studies Vol. 14, No. 3. pp. 575–599. 1988.
+
^"Introduction: Development and the Anthropology of Modernity". Escobar, Arturo. Encountering Development: The Making and Unmaking of the Third World.
^There is quite a good case for this exclusive specialization used by philosophers, in that it allows for in-depth study of logic-procedures and other abstractions which are not found elsewhere. However this may lead to problems whenever the topic spills over into those excluded domains—e.g. when Kant (following Newton) dismissed Space and Time as axiomatically "transcendental" and "a priori" — a claim later disproved by Piaget's clinical studies. It also seems likely that the vexed problem of "infinite regress" can be largely (but not completely) solved by proper attention to how unconscious concepts are actually developed, both during infantile learning and as "pseudo-transcendentals" inherited from the trial-and-error of previous generations. See also "Tacit knowledge".
+
+
Piaget, J., and B. Inhelder (1927/1969). The child's conception of time. Routledge & Kegan Paul: London.
+
Piaget, J., and B. Inhelder (1948/1956). The child's conception of space. Routledge & Kegan Paul: London.
+
^Popper, K.R. (1975). "The rationality of scientific revolutions"; in Rom Harré (ed.), Problems of Scientific Revolution: Scientific Progress and Obstacles to Progress in the Sciences. Clarendon Press: Oxford.
^This "outside world" could include other subsystems within the same organism—e.g. different "mental levels" corresponding to different Piagetian stages. See Theory of cognitive development.
^Swami Krishnananda. "Chapter 7". The Philosophy of the Panchadasi. The Divine Life Society. Retrieved 2008-07-05.
+
^Verbit, M. F. (1970). The components and dimensions of religious behavior: Toward a reconceptualization of religiosity. American mosaic, 24, 39.
+
^Küçükcan, T. (2010). Multidimensional Approach to Religion: a way of looking at religious phenomena. Journal for the Study of Religions and Ideologies, 4(10), 60–70.
Mathematicians seek out patterns[9][10] and use them to formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry.
Galileo Galilei (1564–1642) said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth."[12] Carl Friedrich Gauss (1777–1855) referred to mathematics as "the Queen of the Sciences".[13] Benjamin Peirce (1809–1880) called mathematics "the science that draws necessary conclusions".[14] David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise."[15] Albert Einstein (1879–1955) stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16]
+
Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.[17]
The history of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, which is shared by many animals,[18] was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely quantity of their members.
As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time – days, seasons, years.[19]
+
Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[20] The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns and the recording of time.
Persian mathematician Al-Khwarizmi (c. 780 – c. 850), often credited as the father of algebra.
+
During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics, many of them contributed by Persian mathematicians such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī.
+
Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[22]
+
Etymology
+
The word mathematics comes from the Greek μάθημα (máthēma), which, in the ancient Greek language, means "that which is learnt",[23] "what one gets to know", hence also "study" and "science", and in modern Greek just "lesson". The word máthēma is derived from μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino), both of which mean "to learn". In Greece, the word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times.[24] Its adjective is μαθηματικός (mathēmatikós), meaning "related to learning" or "studious", which likewise further came to mean "mathematical". In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), Latin: ars mathematica, meant "the mathematical art".
+
In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations: a particularly notorious one is Saint Augustine's warning that Christians should beware of mathematici meaning astrologers, which is sometimes mistranslated as a condemnation of mathematicians.[25]
+
The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical"; although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from the Greek.[26] In English, the noun mathematics takes singular verb forms. It is often shortened to maths or, in English-speaking North America, math.[27]
Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western world.
+
Aristotle defined mathematics as "the science of quantity", and this definition prevailed until the 18th century.[28] Starting in the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions.[29] Some of these definitions emphasize the deductive character of much of mathematics, some emphasize its abstractness, some emphasize certain topics within mathematics. Today, no consensus on the definition of mathematics prevails, even among professionals.[7] There is not even consensus on whether mathematics is an art or a science.[8] A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable.[7] Some just say, "Mathematics is what mathematicians do."[7]
+
Three leading types of definition of mathematics are called logicist, intuitionist, and formalist, each reflecting a different philosophical school of thought.[30] All have severe problems, none has widespread acceptance, and no reconciliation seems possible.[30]
+
An early definition of mathematics in terms of logic was Benjamin Peirce's "the science that draws necessary conclusions" (1870).[31] In the Principia Mathematica, Bertrand Russell and Alfred North Whitehead advanced the philosophical program known as logicism, and attempted to prove that all mathematical concepts, statements, and principles can be defined and proven entirely in terms of symbolic logic. A logicist definition of mathematics is Russell's "All Mathematics is Symbolic Logic" (1903).[32]
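The flavor of the logicist program can be glimpsed with a modern proof assistant; the following fragment (our own toy example, written in Lean 4 syntax, with a theorem name chosen for illustration) derives a statement purely by symbolic-logical steps, with no appeal to anything beyond the rules of logic:

```lean
-- Commutativity of conjunction, proved from the logical rules alone:
-- the anonymous-constructor pattern destructs the hypothesis p ∧ q
-- into its two components and rebuilds them in the other order.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

Whether all of mathematics can be reduced to such derivations is, of course, exactly the claim the logicist program set out to establish.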
+
Intuitionist definitions, developing from the philosophy of mathematician L.E.J. Brouwer, identify mathematics with certain mental phenomena. An example of an intuitionist definition is "Mathematics is the mental activity which consists in carrying out constructs one after the other."[30] A peculiarity of intuitionism is that it rejects some mathematical ideas considered valid according to other definitions. In particular, while other philosophies of mathematics allow objects that can be proven to exist even though they cannot be constructed, intuitionism allows only mathematical objects that one can actually construct.
+
Formalist definitions identify mathematics with its symbols and the rules for operating on them. Haskell Curry defined mathematics simply as "the science of formal systems".[33] A formal system is a set of symbols, or tokens, and some rules telling how the tokens may be combined into formulas. In formal systems, the word axiom has a special meaning, different from the ordinary meaning of "a self-evident truth". In formal systems, an axiom is a combination of tokens that is included in a given formal system without needing to be derived using the rules of the system.
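As a concrete illustration of the formalist picture, a small formal system can be written out in a few lines of code. The sketch below (an illustration only, not part of any cited definition) implements Hofstadter's well-known MIU system: the tokens are M, I, and U, the single axiom is the string "MI", and four rewriting rules generate the theorems.

```python
from collections import deque

def miu_successors(s):
    """Apply each MIU rewriting rule everywhere it fits."""
    out = set()
    if s.endswith("I"):            # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):    # Rule 3: III -> U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):    # Rule 4: UU -> (deleted)
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def derivable(axiom, target, max_len=10):
    """Breadth-first search for target among the system's theorems."""
    seen, queue = {axiom}, deque([axiom])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in miu_successors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append(t)
    return False

print(derivable("MI", "MIIU"))  # True: MI -> MII -> MIIU
```

Here derivable("MI", "MIIU") finds the derivation MI → MII → MIIU, while the famous puzzle string "MU" is never produced: the count of I's is never divisible by 3, an invariant that keeps MU outside the system's theorems.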
Gauss referred to mathematics as "the Queen of the Sciences".[13] In the original Latin Regina Scientiarum, as well as in German Königin der Wissenschaften, the word corresponding to science means a "field of knowledge", and this was the original meaning of "science" in English as well; mathematics is in this sense a field of knowledge. The specialization restricting the meaning of "science" to natural science follows the rise of Baconian science, which contrasted "natural science" to scholasticism, the Aristotelean method of inquiring from first principles. The role of empirical experimentation and observation is negligible in mathematics, compared to natural sciences such as biology, chemistry, or physics. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] More recently, Marcus du Sautoy has called mathematics "the Queen of Science ... the main driving force behind scientific discovery".[34]
+
Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[35] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians[who?] that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[36] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.
+
An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. The theoretical physicist J.M. Ziman proposed that science is public knowledge, and thus includes mathematics.[37] Mathematics shares much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics.
+
The opinions of mathematicians on this matter are varied. Many mathematicians[who?] feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others[who?] feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as being allied but that they do not coincide. In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.[citation needed]
+
Inspiration, pure and applied mathematics, and aesthetics
Some mathematics is relevant only in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. A distinction is often made between pure mathematics and applied mathematics. However, pure mathematics topics often turn out to have applications, e.g. number theory in cryptography. This remarkable fact, that even the "purest" mathematics often turns out to have practical applications, is what Eugene Wigner has called "the unreasonable effectiveness of mathematics".[39] As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: there are now hundreds of specialized areas in mathematics and the latest Mathematics Subject Classification runs to 46 pages.[40] Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science.
+
For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds calculation, such as the fast Fourier transform. G.H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He identified criteria such as significance, unexpectedness, inevitability, and economy as factors that contribute to a mathematical aesthetic.[41] Mathematicians often strive to find proofs that are particularly elegant, proofs from "The Book" of God according to Paul Erdős.[42][43] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
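Euclid's proof mentioned above is constructive, and the construction is short enough to run: given any finite list of primes, the smallest prime factor of their product plus one is a prime missing from the list. The following sketch (function names are illustrative) carries it out.

```python
def smallest_prime_factor(n):
    """Trial division: the smallest factor > 1 of n is always prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes):
    """Euclid's construction: a prime dividing (product + 1) divides none
    of the given primes, so it must be a new one."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

print(new_prime([2, 3, 5]))              # 2*3*5 + 1 = 31, itself prime
print(new_prime([2, 3, 5, 7, 11, 13]))   # 30031 = 59 * 509, so 59
```

For [2, 3, 5] the construction yields 31, itself prime; for the first six primes it yields 30031 = 59 × 509, whose smallest factor 59 is the new prime. Note the construction does not claim product + 1 is prime, only that its prime factors are new.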
Leonhard Euler, who created and popularized much of the mathematical notation used today
+
+
+
Most of the mathematical notation in use today was not invented until the 16th century.[44] Before that, mathematics was written out in words, limiting mathematical discovery.[45] Euler (1707–1783) was responsible for many of the notations in use today. Modern notation makes mathematics much easier for the professional, but beginners often find it daunting. It is compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax and encodes information that would be difficult to write in any other way.
+
Mathematical language can be difficult to understand for beginners. Common words such as "or" and "only" have more precise meanings than in everyday speech. Moreover, words such as "open" and "field" have specialized mathematical meanings. Technical terms such as homeomorphism and integrable have precise meanings in mathematics. Additionally, shorthand phrases such as "iff" for "if and only if" belong to mathematical jargon. There is a reason for special notation and technical vocabulary: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".
+
Mathematical proof is fundamentally a matter of rigor. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject.[46] The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century. Misunderstanding the rigor is a cause for some of the common misconceptions of mathematics. Today, mathematicians continue to argue among themselves about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous.[47]
+
Axioms in traditional thought were "self-evident truths", but that conception is problematic.[48] At a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a final axiomatization of mathematics is impossible. Nonetheless mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory.[49]
An abacus, a simple calculating tool used since ancient times
+
+
+
Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty. While some areas might seem unrelated, the Langlands program has found connections between areas previously thought unconnected, such as Galois groups, Riemann surfaces and number theory.
+
Foundations and philosophy
+
In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed. Mathematical logic includes the mathematical study of logic and the applications of formal logic to other areas of mathematics; set theory is the branch of mathematics that studies sets or collections of objects. Category theory, which deals in an abstract way with mathematical structures and relationships between them, is still in development. The phrase "crisis of foundations" describes the search for a rigorous foundation for mathematics that took place from approximately 1900 to 1930.[50] Some disagreement about the foundations of mathematics continues to the present day. The crisis of foundations was stimulated by a number of controversies at the time, including the controversy over Cantor's set theory and the Brouwer–Hilbert controversy.
+
Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework, and studying the implications of such a framework. As such, it is home to Gödel's incompleteness theorems which (informally) imply that any effective formal system that contains basic arithmetic, if sound (meaning that all theorems that can be proven are true), is necessarily incomplete (meaning that there are true theorems which cannot be proved in that system). Whatever finite collection of number-theoretical axioms is taken as a foundation, Gödel showed how to construct a formal statement that is a true number-theoretical fact, but which does not follow from those axioms. Therefore, no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science,[citation needed] as well as to category theory.
+
Theoretical computer science includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most well-known model – the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advancement of computer hardware. A famous problem is the "P = NP?" problem, one of the Millennium Prize Problems.[51] Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence deals with concepts such as compression and entropy.
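The asymmetry behind the "P = NP?" question can be made concrete with subset sum, a standard NP-complete problem: checking a proposed solution takes time linear in its size, while the obvious search examines all 2^n subsets. A minimal sketch (illustrative only, not an account of the Millennium Prize formulation):

```python
from itertools import combinations

def verify(nums, target, indices):
    """Polynomial-time check of a proposed certificate (the 'NP' side)."""
    return sum(nums[i] for i in set(indices)) == target

def search(nums, target):
    """Brute force over all 2^n subsets; no polynomial-time
    algorithm for this problem is known."""
    for r in range(len(nums) + 1):
        for idx in combinations(range(len(nums)), r):
            if verify(nums, target, idx):
                return idx
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))  # (2, 4): nums[2] + nums[4] = 4 + 5 = 9
```

Verifying the returned certificate is instant even for large inputs; it is the search that blows up exponentially, which is why such problems are thought (but not proven) to lie outside P.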
As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of "infinity". Another area of study is size, which leads to the cardinal numbers and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.
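The first steps of this hierarchy are mirrored in the numeric tower of many programming languages. In Python, for instance, integers embed exactly into the rationals, which can be converted (approximately) to floating-point reals, which in turn embed into the complex numbers:

```python
from fractions import Fraction

n = 7                      # natural number / integer
q = Fraction(7, 2)         # rational: integers embed as Fraction(n, 1)
r = float(q)               # real (approximated by floating point)
c = complex(r, 1.0)        # complex: reals embed with zero imaginary part

assert Fraction(n) == n    # the integers sit inside the rationals
assert complex(r) == r     # the reals sit inside the complex numbers
print(n, q, r, c)          # 7 7/2 3.5 (3.5+1j)
```

The transfinite and cardinal numbers mentioned above have no such machine counterpart: they belong to set theory, not to any finite numeric type.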
Many mathematical objects, such as sets of numbers and functions, exhibit internal structure as a consequence of operations or relations that are defined on the set. Mathematics then studies properties of those sets that can be expressed in terms of that structure; for instance number theory studies properties of the set of integers that can be expressed in terms of arithmetic operations. Moreover, it frequently happens that different such structured sets (or structures) exhibit similar properties, which makes it possible, by a further step of abstraction, to state axioms for a class of structures, and then study at once the whole class of structures satisfying these axioms. Thus one can study groups, rings, fields and other abstract systems; together such studies (for structures defined by algebraic operations) constitute the domain of abstract algebra.
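For a small finite structure, axioms of this kind can be checked exhaustively. The sketch below (a hypothetical helper, not a standard library function) tests the group axioms, closure, associativity, identity, and inverses, by brute force, confirming that the integers mod 5 form a group under addition but not under multiplication, since 0 has no inverse:

```python
def is_group(elements, op):
    """Check closure, associativity, identity, and inverses by brute force."""
    els = list(elements)
    # closure: every product stays in the set
    if any(op(a, b) not in els for a in els for b in els):
        return False
    # associativity: (a.b).c == a.(b.c)
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in els for b in els for c in els):
        return False
    # identity: some e with e.a == a == a.e for all a
    ids = [e for e in els if all(op(e, a) == a == op(a, e) for a in els)]
    if not ids:
        return False
    e = ids[0]
    # inverses: every a has some b with a.b == e
    return all(any(op(a, b) == e for b in els) for a in els)

mod5_add = lambda a, b: (a + b) % 5
print(is_group(range(5), mod5_add))                  # True
print(is_group(range(5), lambda a, b: (a * b) % 5))  # False: 0 has no inverse
```

Dropping 0 repairs the second example: the nonzero residues mod 5 do form a group under multiplication, a special case of the fact that the nonzero residues modulo any prime form a group.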
+
By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong interactions in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.
Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here, as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
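The logistic map is a standard one-line example of such a dynamical system: the iteration is completely deterministic, yet at parameter r = 4 two orbits started a billionth apart soon become macroscopically different, the sensitive dependence on initial conditions that chaos theory makes precise.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), collecting the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)   # perturb the ninth decimal place
print(max(abs(x - y) for x, y in zip(a, b)))  # separation grows to order 1
```

The tiny initial difference roughly doubles at each step, so after a few dozen iterations the two orbits are as far apart as two unrelated points in the interval, even though each is exactly reproducible from its starting value.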
Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.
+
In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.
+
Statistics and other decision sciences
+
Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments;[52] the design of a statistical sample or experiment specifies the analysis of the data (before the data become available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference – with model selection and estimation; the estimated models and consequential predictions should be tested on new data.[53]
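A toy simulation (illustrative numbers only) shows why random sampling "creates data that makes sense": a simple random sample of 1,000 from a synthetic population of 100,000 yields a mean estimate close to the population mean, with a sampling error that the theory of inference can quantify before the data are collected.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = sum(population) / len(population)

sample = random.sample(population, 1_000)   # simple random sample
estimate = sum(sample) / len(sample)

# standard error of the mean: sigma / sqrt(n) = 10 / sqrt(1000) ~ 0.32,
# so the estimate should land within about a unit of the true mean
print(round(true_mean, 2), round(estimate, 2))
```

The point is the one made above: the design (here, simple random sampling) determines the analysis and its error bounds in advance, before any particular data are seen.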
Arguably the most prestigious award in mathematics is the Fields Medal,[56][57] established in 1936 and awarded every four years (except around World War II) to as many as four individuals. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize.
+
The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field.
+
A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. A solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.
^ No likeness or description of Euclid's physical appearance made during his lifetime survived antiquity. Therefore, Euclid's depiction in works of art depends on the artist's imagination (see Euclid).
+
^ "mathematics, n.". Oxford English Dictionary. Oxford University Press. 2012. Retrieved June 16, 2012. The science of space, number, quantity, and arrangement, whose methods involve logical reasoning and usually the use of symbolic notation, and which includes geometry, arithmetic, algebra, and analysis.

^ Kneebone, G.T. (1963). Mathematical Logic and the Foundations of Mathematics: An Introductory Survey. Dover. p. 4. ISBN 0-486-41712-3. Mathematics ... is simply the study of abstract structures, or formal patterns of connectedness.

^ LaTorre, Donald R., John W. Kenelly, Iris B. Reed, Laurel R. Carpenter, and Cynthia R. Harris (2011). Calculus Concepts: An Informal Approach to the Mathematics of Change. Cengage Learning. p. 2. ISBN 1-4390-4957-2. Calculus is the study of change—how things change, and how quickly they change.

^ Ramana (2007). Applied Mathematics. Tata McGraw–Hill Education. p. 2.10. ISBN 0-07-066753-5. The mathematical study of change, motion, growth or decay is calculus.

^ Ziegler, Günter M. (2011). "What Is Mathematics?". An Invitation to Mathematics: From Competitions to Research. Springer. p. 7. ISBN 3-642-19532-6.

^ Mura, Roberta (Dec 1993). "Images of Mathematics Held by University Teachers of Mathematical Sciences". Educational Studies in Mathematics 25 (4): 375–385.

^ Tobies, Renate and Helmut Neunzert (2012). Iris Runge: A Life at the Crossroads of Mathematics, Science, and Industry. Springer. p. 9. ISBN 3-0348-0229-3. It is first necessary to ask what is meant by mathematics in general. Illustrious scholars have debated this matter until they were blue in the face, and yet no consensus has been reached about whether mathematics is a natural science, a branch of the humanities, or an art form.
^ Devlin, Keith, Mathematics: The Science of Patterns: The Search for Order in Life, Mind and the Universe (Scientific American Paperback Library) 1996, ISBN 978-0-7167-5047-5
^ Hilbert, D. (1919–20), Natur und Mathematisches Erkennen: Vorlesungen, gehalten 1919–1920 in Göttingen. Nach der Ausarbeitung von Paul Bernays (Edited and with an English introduction by David E. Rowe), Basel, Birkhäuser (1992).
+
^ Einstein, p. 28. The quote is Einstein's answer to the question: "how can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" He, too, is concerned with The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
^ Dehaene, Stanislas; Dehaene-Lambertz, Ghislaine; Cohen, Laurent (Aug 1998). "Abstract representations of numbers in the animal and human brain". Trends in Neuroscience 21 (8): 355–361. doi:10.1016/S0166-2236(98)01263-6. PMID 9720604.
+
^ See, for example, Raymond L. Wilder, Evolution of Mathematical Concepts; an Elementary Study, passim
^ Snapper, Ernst (September 1979). "The Three Crises in Mathematics: Logicism, Intuitionism, and Formalism". Mathematics Magazine 52 (4): 207–16. doi:10.2307/2689412. JSTOR 2689412.
^ See false proof for simple examples of what can go wrong in a formal proof.
+
^ Ivars Peterson, The Mathematical Tourist, Freeman, 1988, ISBN 0-7167-1953-3. p. 4: "A few complain that the computer program can't be verified properly" (in reference to the Appel–Haken proof of the Four Color Theorem).
+
^ "The method of 'postulating' what we want has many advantages; they are the same as the advantages of theft over honest toil." Bertrand Russell (1919), Introduction to Mathematical Philosophy, New York and London, p. 71.
+
^ Patrick Suppes, Axiomatic Set Theory, Dover, 1972, ISBN 0-486-61630-4. p. 1, "Among the many branches of modern mathematics set theory occupies a unique place: with a few rare exceptions the entities which are studied and analyzed in mathematics may be regarded as certain particular sets or classes of objects."
+
^ Luke Howard Hodgkin & Luke Hodgkin, A History of Mathematics, Oxford University Press, 2005.
^ Like other mathematical sciences such as physics and computer science, statistics is an autonomous discipline rather than a branch of applied mathematics. Like research physicists and computer scientists, research statisticians are mathematical scientists. Many statisticians have a degree in mathematics, and some statisticians are also mathematicians.
+
^ Rao, C.R. (1981). "Foreword". In Arthanari, T.S.; Dodge, Yadolah. Mathematical programming in statistics. Wiley Series in Probability and Mathematical Statistics. New York: Wiley. pp. vii–viii. ISBN 0-471-08073-X. MR 607328.
Courant, Richard and H. Robbins, What Is Mathematics? : An Elementary Approach to Ideas and Methods, Oxford University Press, USA; 2 edition (July 18, 1996). ISBN 0-19-510519-2.
Pappas, Theoni, The Joy Of Mathematics, Wide World Publishing; Revised edition (June 1989). ISBN 0-933174-65-9.
+
Peirce, Benjamin (1881). Peirce, Charles Sanders, ed. "Linear associative algebra". American Journal of Mathematics (Johns Hopkins University) 4 (1–4): 97–229. doi:10.2307/2369153. JSTOR 2369153. Corrected, expanded, and annotated revision, with an 1875 paper by B. Peirce and annotations by his son, C.S. Peirce, of the 1872 lithograph ed. Google Eprint; also as an extract, D. Van Nostrand, 1882, Google Eprint.
+
Peterson, Ivars, Mathematical Tourist, New and Updated Snapshots of Modern Mathematics, Owl Books, 2001, ISBN 0-8050-7159-8.
+
Popper, Karl R. (1995). "On knowledge". In Search of a Better World: Lectures and Essays from Thirty Years. Routledge. ISBN0-415-13548-6.
Boyer, Carl B., A History of Mathematics, Wiley; 2nd edition, revised by Uta C. Merzbach, (March 6, 1991). ISBN 0-471-54397-7.—A concise history of mathematics from the Concept of Number to contemporary Mathematics.
Jourdain, Philip E. B., The Nature of Mathematics, in The World of Mathematics, James R. Newman, editor, Dover Publications, 2003, ISBN 0-486-43268-8.
+
Maier, Annaliese, At the Threshold of Exact Science: Selected Writings of Annaliese Maier on Late Medieval Natural Philosophy, edited by Steven Sargent, Philadelphia: University of Pennsylvania Press, 1982.
+
+
+
External links
+
+
+
+
Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics.
Planet Math. An online mathematics encyclopedia under construction, focusing on modern mathematics. Uses the Attribution-ShareAlike license, allowing article exchange with Wikipedia. Uses TeX markup.
Modern philosophy is a branch of philosophy that originated in Western Europe in the 17th century, and is now common worldwide. It is not a specific doctrine or school (and thus should not be confused with Modernism), although there are certain assumptions common to much of it, which helps to distinguish it from earlier philosophy.[1]
The 17th and early 20th centuries roughly mark the beginning and the end of modern philosophy. How much, if any, of the Renaissance should be included is a matter for dispute; likewise, modernity may or may not have ended in the twentieth century and been replaced by postmodernity. How one decides these questions will determine the scope of one's use of "modern philosophy." This article will focus on the history of philosophy from René Descartes through the early twentieth century, ending with Ludwig Wittgenstein.
In the late eighteenth century Immanuel Kant set forth a groundbreaking philosophical system which claimed to bring unity to rationalism and empiricism. Whether or not he was right, he did not entirely succeed in ending philosophical dispute. Kant sparked a storm of philosophical work in Germany in the early nineteenth century, beginning with German idealism. The characteristic theme of idealism was that the world and the mind equally must be understood according to the same categories; it culminated in the work of Georg Wilhelm Friedrich Hegel, who among many other things said that "The real is rational; the rational is real."
+
Hegel's work was carried in many directions by his followers and critics. Karl Marx appropriated both Hegel's philosophy of history and the empirical ethics dominant in Britain, transforming Hegel's ideas into a strictly materialist form, setting the grounds for the development of a science of society. Søren Kierkegaard, in contrast, dismissed all systematic philosophy as an inadequate guide to life and meaning. For Kierkegaard, life is meant to be lived, not a mystery to be solved. Arthur Schopenhauer took idealism to the conclusion that the world was nothing but the futile endless interplay of images and desires, and advocated atheism and pessimism. Schopenhauer's ideas were taken up and transformed by Nietzsche, who seized upon this dismissal of the world to proclaim "God is dead" and to reject all systematic philosophy and all striving for a fixed truth transcending the individual. Nietzsche found in this not grounds for pessimism, but the possibility of a new kind of freedom.
+
19th-century British philosophy came increasingly to be dominated by strands of neo-Hegelian thought, and as a reaction against this, figures such as Bertrand Russell and George Edward Moore began moving in the direction of analytic philosophy, which was essentially an updating of traditional empiricism to accommodate the new developments in logic of the German mathematician Gottlob Frege.
Modern philosophy traditionally begins with René Descartes and his dictum "I think, therefore I am". In the early seventeenth century the bulk of philosophy was dominated by Scholasticism, written by theologians and drawing upon Plato, Aristotle, and early Church writings. Descartes argued that many predominant Scholastic metaphysical doctrines were meaningless or false. In short, he proposed to begin philosophy from scratch. In his most important work, Meditations on First Philosophy, he attempts just this, over six brief essays. He tries to set aside as much as he possibly can of all his beliefs, to determine what if anything he knows for certain. He finds that he can doubt nearly everything: the reality of physical objects, God, his memories, history, science, even mathematics, but he cannot doubt that he is, in fact, doubting. He knows what he is thinking about, even if it is not true, and he knows that he is there thinking about it. From this basis he builds his knowledge back up again. He finds that some of the ideas he has could not have originated from him alone, but only from God; he proves that God exists. He then demonstrates that God would not allow him to be systematically deceived about everything; in essence, he vindicates ordinary methods of science and reasoning, as fallible but not false.
Empiricism is a theory of knowledge which opposes other theories of knowledge, such as rationalism, idealism and historicism. Empiricism asserts that knowledge comes (only or primarily) via sensory experience, as opposed to rationalism, which asserts that knowledge comes (also) from pure thinking. Both empiricism and rationalism are individualist theories of knowledge, whereas historicism is a social epistemology. While historicism also acknowledges the role of experience, it differs from empiricism by assuming that sensory data cannot be understood without considering the historical and cultural circumstances in which observations are made. Empiricism should not be conflated with empirical research, because different epistemologies are competing views on how best to conduct studies, and there is near consensus among researchers that studies should be empirical. Today empiricism should therefore be understood as one among several competing ideals of gaining knowledge. As such, empiricism is first and foremost characterized by the ideal of letting observational data "speak for themselves", an ideal that the competing views reject. The term empiricism should thus not be understood only in relation to how it has been used in the history of philosophy; it should also be construed in a way that makes it possible to distinguish empiricism from other epistemological positions in contemporary science and scholarship. In other words, empiricism as a concept has to be constructed along with other concepts, which together make it possible to draw important distinctions between the different ideals underlying contemporary science.
+
Empiricism is one of several competing views that predominate in the study of human knowledge, known as epistemology. Empiricism emphasizes the role of experience and evidence, especially sensory perception, in the formation of ideas, over the notion of innate ideas or tradition[2] in contrast to, for example, rationalism which relies upon reason and can incorporate innate knowledge.
Political philosophy is the study of such topics as politics, liberty, justice, property, rights, law, and the enforcement of a legal code by authority: what they are, why (or even if) they are needed, what, if anything, makes a government legitimate, what rights and freedoms it should protect and why, what form it should take and why, what the law is, and what duties citizens owe to a legitimate government, if any, and when it may be legitimately overthrown—if ever. In a vernacular sense, the term "political philosophy" often refers to a general view, or specific ethic, political belief or attitude, about politics that does not necessarily belong to the technical discipline of philosophy.[3]
Idealism refers to the group of philosophies which assert that reality, or reality as we can know it, is fundamentally a construct of the mind or otherwise immaterial. Epistemologically, idealism manifests as a skepticism about the possibility of knowing any mind-independent thing. In a sociological sense, idealism emphasizes how human ideas—especially beliefs and values—shape society.[4] As an ontological doctrine, idealism goes further, asserting that all entities are composed of mind or spirit.[5] Idealism thus rejects physicalist and dualist theories that fail to ascribe priority to the mind. An extreme version of this idealism can exist in the philosophical notion of solipsism.
Existentialism is generally considered to be the philosophical and cultural movement which holds that the starting point of philosophical thinking must be the individual and the experiences of the individual. Building on that, existentialists hold that moral thinking and scientific thinking together do not suffice to understand human existence, and, therefore, a further set of categories, governed by the norm of authenticity, is necessary to understand human existence.[6][7][8]
Phenomenology is the study of the structure of experience. It is a broad philosophical movement founded in the early years of the 20th century by Edmund Husserl, expanded upon by a circle of his followers at the universities of Göttingen and Munich in Germany. The philosophy then spread to France, the United States, and elsewhere, often in contexts far removed from Husserl's early work.[9]
^Baird, Forrest E.; Walter Kaufmann (2008). From Plato to Derrida. Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 0-13-158591-6.
^Hampton, Jean (1997). Political philosophy. p. xiii. ISBN 9780813308586. Charles Blattberg, who defines politics as "responding to conflict with dialogue," suggests that political philosophies offer philosophical accounts of that dialogue. See his "Political Philosophies and Political Ideologies", SSRN 1755117, in Patriotic Elaborations: Essays in Practical Philosophy, Montreal and Kingston: McGill-Queen's University Press, 2009.
^Macionis, John J. (2012). Sociology (14th ed.). Boston: Pearson. p. 88. ISBN 978-0-205-11671-3.
^"Without exception, the best philosophy departments in the United States are dominated by analytic philosophy, and among the leading philosophers in the United States, all but a tiny handful would be classified as analytic philosophers. Practitioners of types of philosophizing that are not in the analytic tradition—such as phenomenology, classical pragmatism, existentialism, or Marxism—feel it necessary to define their position in relation to analytic philosophy." John Searle (2003) Contemporary Philosophy in the United States in N. Bunnin and E.P. Tsui-James (eds.), The Blackwell Companion to Philosophy, 2nd ed., (Blackwell, 2003), p. 1.
^See, e.g., Avrum Stroll, Twentieth-Century Analytic Philosophy (Columbia University Press, 2000), p. 5: "[I]t is difficult to give a precise definition of 'analytic philosophy' since it is not so much a specific doctrine as a loose concatenation of approaches to problems." Also, see ibid., p. 7: "I think Sluga is right in saying 'it may be hopeless to try to determine the essence of analytic philosophy.' Nearly every proposed definition has been challenged by some scholar. [...] [W]e are dealing with a family resemblance concept."
^See Hans-Johann Glock, What Is Analytic Philosophy (Cambridge University Press, 2008), p. 205: "The answer to the title question, then, is that analytic philosophy is a tradition held together both by ties of mutual influence and by family resemblances."
^Brian Leiter (2006) webpage “Analytic” and “Continental” Philosophy. Quote on the definition: "'Analytic' philosophy today names a style of doing philosophy, not a philosophical program or a set of substantive views. Analytic philosophers, crudely speaking, aim for argumentative clarity and precision; draw freely on the tools of logic; and often identify, professionally and intellectually, more closely with the sciences and mathematics, than with the humanities."
^H. Glock, "Was Wittgenstein an Analytic Philosopher?", Metaphilosophy, 35:4 (2004), pp. 419–444.
^Colin McGinn, The Making of a Philosopher: My Journey through Twentieth-Century Philosophy (HarperCollins, 2002), p. xi.: "analytical philosophy [is] too narrow a label, since [it] is not generally a matter of taking a word or concept and analyzing it (whatever exactly that might be). [...] This tradition emphasizes clarity, rigor, argument, theory, truth. It is not a tradition that aims primarily for inspiration or consolation or ideology. Nor is it particularly concerned with 'philosophy of life,' though parts of it are. This kind of philosophy is more like science than religion, more like mathematics than poetry – though it is neither science nor mathematics."
As a method, philosophy is often distinguished from other ways of addressing such problems by its questioning, critical, generally systematic approach and its reliance on rational argument.[10] As a noun, the term "philosophy" can refer to any body of knowledge.[11] Historically, these bodies of knowledge were commonly divided into natural philosophy, moral philosophy, and metaphysical philosophy.[9] In casual speech, the term can refer to any of "the most basic beliefs, concepts, and attitudes of an individual or group," (e.g., "Dr. Smith's philosophy of parenting").[12]
Skepticism is the position that questions the possibility of completely justifying any truth. The regress argument, a fundamental problem in epistemology, arises because, in order to completely prove any statement, its justification must itself be supported by another justification. The resulting chain can take one of three forms, all of which are unsatisfactory according to the Münchhausen trilemma. One option is infinitism, in which the chain of justification goes on forever. Another is foundationalism, in which the chain eventually rests on basic beliefs or axioms that are left unproven. The last option, taken in coherentism, makes the chain circular, so that a statement is included in its own chain of justification.
Rationalism is the emphasis on reasoning as a source of knowledge. Empiricism is the emphasis on observational evidence via sensory experience over other evidence as the source of knowledge. Rationalism claims that every possible object of knowledge can be deduced from coherent premises without observation. Empiricism claims that at least some knowledge is only a matter of observation. In support of this, empiricism often cites the concept of tabula rasa, according to which individuals are born without mental content and knowledge builds from experience or perception. Epistemological solipsism is the idea that the existence of the world outside the mind is an unresolvable question.
Parmenides (fl. 500 BC) argued that it is impossible to doubt that thinking actually occurs. But thinking must have an object; therefore something beyond thinking really exists. Parmenides deduced that what really exists must have certain properties: for example, that it cannot come into existence or cease to exist, that it is a coherent whole, and that it remains the same eternally (in fact, exists altogether outside time). Plato (427–347 BC) combined rationalism with a form of realism. The philosopher's work is to consider being, and the essence (ousia) of things. But the characteristic of essences is that they are universal. The nature of a man, a triangle, a tree, applies to all men, all triangles, all trees. Plato argued that these essences are mind-independent "forms", which humans (but particularly philosophers) can come to know by reason and by ignoring the distractions of sense-perception.
Modern rationalism begins with Descartes. Reflection on the nature of perceptual experience, as well as scientific discoveries in physiology and optics, led Descartes (and also Locke) to the view that we are directly aware of ideas, rather than objects. This view gave rise to three questions:
Is an idea a true copy of the real thing that it represents? Sensation is not a direct interaction between bodily objects and our sense, but is a physiological process involving representation (for example, an image on the retina). Locke thought that a "secondary quality" such as a sensation of green could in no way resemble the arrangement of particles in matter that go to produce this sensation, although he thought that "primary qualities" such as shape, size, number, were really in objects.
How can physical objects such as chairs and tables, or even physiological processes in the brain, give rise to mental items such as ideas? This is part of what became known as the mind-body problem.
If all the contents of awareness are ideas, how can we know that anything exists apart from ideas?
Descartes tried to address the last problem by reason. He began, echoing Parmenides, with a principle that he thought could not coherently be denied: I think, therefore I am (often given in his original Latin: Cogito ergo sum). From this principle, Descartes went on to construct a complete system of knowledge (which involves proving the existence of God, using, among other means, a version of the ontological argument).[17] His view that reason alone could yield substantial truths about reality strongly influenced those philosophers usually considered modern rationalists (such as Baruch Spinoza, Gottfried Leibniz, and Christian Wolff), while provoking criticism from other philosophers who have retrospectively come to be grouped together as empiricists.
Metaphysics is the study of the most general features of reality, such as existence, time, the relationship between mind and body, objects and their properties, wholes and their parts, events, processes, and causation. Traditional branches of metaphysics include cosmology, the study of the world in its entirety, and ontology, the study of being.
Within metaphysics itself there are a wide range of differing philosophical theories. Idealism, for example, is the belief that reality is mentally constructed or otherwise immaterial while realism holds that reality, or at least some part of it, exists independently of the mind. Subjective idealism describes objects as no more than collections or "bundles" of sense data in the perceiver. The 18th-century philosopher George Berkeley contended that existence is fundamentally tied to perception with the phrase Esse est aut percipi aut percipere or "To be is to be perceived or to perceive".[18]
In addition to the aforementioned views, there is also an ontological dichotomy within metaphysics between the concepts of particulars and universals. Particulars are those objects that are said to exist in space and time, as opposed to abstract objects, such as numbers. Universals are properties held by multiple particulars, such as redness or a gender. The type of existence, if any, of universals and abstract objects is an issue of serious debate within metaphysical philosophy. Realism is the philosophical position that universals do in fact exist, while nominalism is the negation, or denial, of universals, abstract objects, or both.[19] Conceptualism holds that universals exist, but only within the mind's perception.[20]
The question of whether or not existence is a predicate has been discussed since the Early Modern period. Essence is the set of attributes that make an object what it fundamentally is and without which it loses its identity. Essence is contrasted with accident: a property that the substance has contingently, without which the substance can still retain its identity.
Ethics, or "moral philosophy," is concerned primarily with the question of the best way to live, and secondarily with the question of whether this question can be answered. The main branches of ethics are meta-ethics, normative ethics, and applied ethics. Meta-ethics concerns the nature of ethical thought: the origins of the words good and bad, the origins of other comparative terms of various ethical systems, whether there are absolute ethical truths, and how such truths could be known. Normative ethics is more concerned with the questions of how one ought to act and what the right course of action is; this is where most ethical theories are generated. Lastly, applied ethics goes beyond theory and steps into real-world ethical practice, such as the question of whether abortion is morally permissible. Ethics is also associated with the idea of morality, and the two terms are often interchangeable.
One debate that has commanded the attention of ethicists in the modern era has been between consequentialism (actions are to be morally evaluated solely by their consequences) and deontology (actions are to be morally evaluated solely by consideration of agents' duties, the rights of those whom the action concerns, or both). Jeremy Bentham and John Stuart Mill are famous for promulgating utilitarianism, which is the idea that the fundamental moral rule is to strive toward the "greatest happiness for the greatest number". However, in promoting this idea they also necessarily promoted the broader doctrine of consequentialism. Adopting a position opposed to consequentialism, Immanuel Kant argued that moral principles were simply products of reason. Kant believed that the incorporation of consequences into moral deliberation was a deep mistake, since it denies the necessity of practical maxims in governing the working of the will. According to Kant, reason requires that we conform our actions to the categorical imperative, which is an absolute duty. An important 20th-century deontologist, W.D. Ross, argued for weaker forms of duties called prima facie duties.
More recent works have emphasized the role of character in ethics, a movement known as the aretaic turn (that is, the turn towards virtues). One strain of this movement followed the work of Bernard Williams. Williams noted that rigid forms of consequentialism and deontology demanded that people behave impartially. This, Williams argued, requires that people abandon their personal projects, and hence their personal integrity, in order to be considered moral. Elizabeth Anscombe, in an influential paper, "Modern Moral Philosophy" (1958), revived virtue ethics as an alternative to what were seen as the entrenched positions of Kantianism and consequentialism. Aretaic perspectives have been inspired in part by research on ancient conceptions of virtue. For example, Aristotle's ethics demands that people follow the Aristotelian mean, or balance between two vices; and Confucian ethics argues that virtue consists largely in striving for harmony with other people. Virtue ethics in general has since gained many adherents, and has been defended by such philosophers as Philippa Foot, Alasdair MacIntyre, and Rosalind Hursthouse.
Political philosophy is the study of government and the relationship of individuals (or families and clans) to communities including the state. It includes questions about justice, law, property, and the rights and obligations of the citizen. Politics and ethics are traditionally inter-linked subjects, as both discuss the question of what is good and how people should live. From ancient times, and well beyond them, the roots of justification for political authority were inescapably tied to outlooks on human nature. In The Republic, Plato presented the argument that the ideal society would be run by a council of philosopher-kings, since those best at philosophy are best able to realize the good. Even Plato, however, required philosophers to make their way in the world for many years before beginning their rule at the age of fifty.
For Aristotle, humans are political animals (i.e. social animals), and governments are set up to pursue good for the community. Aristotle reasoned that, since the state (polis) was the highest form of community, it has the purpose of pursuing the highest good. Aristotle viewed political power as the result of natural inequalities in skill and virtue. Because of these differences, he favored an aristocracy of the able and virtuous. For Aristotle, the person cannot be complete unless he or she lives in a community. His Nicomachean Ethics and Politics are meant to be read in that order. The first addresses virtues (or "excellences") in the person as a citizen; the second addresses the proper form of government to ensure that citizens will be virtuous, and therefore complete. Both books deal with the essential role of justice in civic life.
Nicholas of Cusa rekindled Platonic thought in the early 15th century. He promoted democracy in medieval Europe, both in his writings and in his organization of the Council of Florence. Unlike Aristotle and the Hobbesian tradition to follow, Cusa saw human beings as equal and divine (that is, made in God's image), so democracy would be the only just form of government. Cusa's views are credited by some with sparking the Italian Renaissance, which gave rise to the notion of nation-states.
Later, Niccolò Machiavelli rejected the views of Aristotle and Thomas Aquinas as unrealistic. The ideal sovereign is not the embodiment of the moral virtues; rather the sovereign does whatever is successful and necessary, rather than what is morally praiseworthy. Thomas Hobbes also contested many elements of Aristotle's views. For Hobbes, human nature is essentially anti-social: people are essentially egoistic, and this egoism makes life difficult in the natural state of things. Moreover, Hobbes argued, though people may have natural inequalities, these are trivial, since no particular talents or virtues that people may have will make them safe from harm inflicted by others. For these reasons, Hobbes concluded that the state arises from a common agreement to raise the community out of the state of nature. This can only be done by the establishment of a sovereign, in which (or whom) is vested complete control over the community, and is able to inspire awe and terror in its subjects.[21]
Many in the Enlightenment were unsatisfied with existing doctrines in political philosophy, which seemed to marginalize or neglect the possibility of a democratic state. Jean-Jacques Rousseau was among those who attempted to overturn these doctrines: he responded to Hobbes by claiming that a human is by nature a kind of "noble savage", and that society and social contracts corrupt this nature. Another critic was John Locke. In his Second Treatise of Government he agreed with Hobbes that the nation-state was an efficient tool for raising humanity out of a deplorable state, but he argued that the sovereign might become an abominable institution compared to the relatively benign unmodulated state of nature.[22]
Following the doctrine of the fact-value distinction, due in part to the influence of David Hume and Adam Smith, appeals to human nature for political justification were weakened. Nevertheless, many political philosophers, especially moral realists, still make use of some essential human nature as a basis for their arguments.
Marxism is derived from the work of Karl Marx and Friedrich Engels. Their idea that capitalism is based on the exploitation of workers and causes alienation of people from their human nature, their historical materialism, their view of social classes, and related theses have influenced many fields of study, such as sociology, economics, and politics. Marxism inspired the Marxist school of communism, which had a huge impact on the history of the 20th century.
Aesthetics deals with beauty, art, enjoyment, sensory-emotional values, perception, and matters of taste and sentiment. It is a branch of philosophy dealing with the nature of art, beauty, and taste, with the creation and appreciation of beauty.[23][24] It is more scientifically defined as the study of sensory or sensori-emotional values, sometimes called judgments of sentiment and taste.[25] More broadly, scholars in the field define aesthetics as "critical reflection on art, culture and nature."[26][27]
More specific aesthetic theory, often with practical implications, relating to a particular branch of the arts is divided into areas of aesthetics such as art theory, literary theory, film theory and music theory. An example from art theory is aesthetic theory as a set of principles underlying the work of a particular artist or artistic movement: such as the Cubist aesthetic.[28]
Philosophy of law (often called jurisprudence) explores the varying theories explaining the nature and the interpretations of law.
Philosophy of mind explores the nature of the mind, and its relationship to the body, and is typified by disputes between dualism and materialism. In recent years there has been increasing similarity between this branch of philosophy and cognitive science.
Philosophy of religion explores questions that often arise in connection with one or several religions, including the soul, the afterlife, God, religious experiences, analysis of religious vocabulary and texts, and the relationship of religion and science.
Philosophy of science explores the foundations, methods, history, implications, and purpose of science.
Feminist philosophy explores questions surrounding gender, sexuality, and the body including the nature of feminism itself as a social and philosophical movement.
Philosophy of film analyzes films and filmmakers for their philosophical content and style, and explores film (images, cinema, etc.) as a medium for philosophical reflection and expression.
Metaphilosophy explores the aims of philosophy, its boundaries, and its methods.
Many academic disciplines have also generated philosophical inquiry. These include history, logic, and mathematics.
There are authors who date the philosophical maxims of Ptahhotep before the 25th century BC. For instance, Pulitzer Prize–winning historian Will Durant dates these writings as early as 2880 BCE in The Story of Civilization: Our Oriental Heritage. Durant claims that Ptahhotep could be considered the very first philosopher by virtue of having left the earliest surviving fragments of moral philosophy (i.e., "The Maxims of Ptah-Hotep").[30] Ptahhotep's grandson, Ptahhotep Tshefi, is traditionally credited with being the author of the collection of wise sayings known as The Maxims of Ptahhotep,[31] whose opening lines attribute authorship to the vizier Ptahhotep: "Instruction of the Mayor of the city, the Vizier Ptahhotep, under the Majesty of King Isesi".
Confucianism is a humanistic[35] philosophy that holds that human beings are teachable, improvable and perfectible through personal and communal endeavour, especially self-cultivation and self-creation. Confucianism focuses on the cultivation of virtue and the maintenance of ethics, the most basic of which are ren, yi, and li.[36] Ren is an obligation of altruism and humaneness toward other individuals within a community, yi is the upholding of righteousness and the moral disposition to do good, and li is a system of norms and propriety that determines how a person should properly act within a community.[36]
Taoism focuses on establishing harmony with the Tao, which is the origin of and the totality of everything that exists. The word "Tao" (or "Dao", depending on the romanization scheme) is usually translated as "way", "path" or "principle". Taoist propriety and ethics emphasize the Three Jewels of the Tao: compassion, moderation, and humility, while Taoist thought generally focuses on nature, the relationship between humanity and the cosmos (天人相应), health and longevity, and wu wei, action through inaction. Harmony with the Universe, or with its origin through the Tao, is the intended result of many Taoist rules and practices.
Ancient Graeco-Roman philosophy is a period of Western philosophy spanning the 6th century BC (beginning c. 585 BC) to the 6th century AD. It is usually divided into three periods: the pre-Socratic period, the Classical Greek period of Plato and Aristotle, and the post-Aristotelian (or Hellenistic) period. A fourth period that is sometimes added includes the Neoplatonic and Christian philosophers of Late Antiquity. The most important of the ancient philosophers (in terms of subsequent influence) are Plato and Aristotle.[37] Plato in particular is credited as the founder of Western philosophy. The philosopher Alfred North Whitehead said of Plato: "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato. I do not mean the systematic scheme of thought which scholars have doubtfully extracted from his writings. I allude to the wealth of general ideas scattered through them."[38]
In ancient Roman accounts, Pythagoras was said to be the first man to call himself a philosopher, or lover of wisdom,[39] and Pythagorean ideas exercised a marked influence on Plato, and through him, on all of Western philosophy. Plato and Aristotle, the first Classical Greek philosophers, referred critically to other "wise men", called "sophists" in Greek, who were common before Pythagoras' time. From their critique it appears that a distinction was established in their own Classical period between the more elevated and pure "lovers of wisdom" (the true philosophers) and these earlier, more common traveling teachers, who often also earned money from their craft.
The main subjects of ancient philosophy are: understanding the fundamental causes and principles of the universe; explaining it in an economical way; the epistemological problem of reconciling the diversity and change of the natural universe, with the possibility of obtaining fixed and certain knowledge about it; questions about things that cannot be perceived by the senses, such as numbers, elements, universals, and gods. Socrates is said to have been the initiator of more focused study upon the human things including the analysis of patterns of reasoning and argument and the nature of the good life and the importance of understanding and knowledge in order to pursue it; the explication of the concept of justice, and its relation to various political systems.[37]
In this period the crucial features of the Western philosophical method were established: a critical approach to received or established views, and the appeal to reason and argumentation. This includes Socrates' dialectic method of inquiry, known as the Socratic method or method of "elenchus", which he largely applied to the examination of key moral concepts such as the Good and Justice. To solve a problem, it would be broken down into a series of questions, the answers to which gradually distill the answer a person would seek. The influence of this approach is most strongly felt today in the use of the scientific method, in which hypothesis is the first stage.
The term Indian philosophy (Sanskrit: Darshanas) refers to any of several schools of philosophical thought that originated in the Indian subcontinent, including Hindu philosophy, Buddhist philosophy, and Jain philosophy. Having the same, or rather intertwined, origins, all of these philosophies share the common underlying themes of Dharma and Karma, and similarly attempt to explain the attainment of Moksha (liberation). They were formalized and promulgated chiefly between 1000 BC and a few centuries AD.
India's philosophical tradition dates back to the composition of the Upanisads[40] in the later Vedic period (c. 1000-500 BCE). Subsequent schools (Skt: Darshanas) of Indian philosophy were identified as orthodox (Skt: astika) or non-orthodox (Skt: nastika), depending on whether or not they regarded the Vedas as an infallible source of knowledge.[41] In the history of the Indian subcontinent, following the establishment of a Vedic culture, the development of philosophical and religious thought over a period of two millennia gave rise to what came to be called the six schools of astika, or orthodox, Indian or Hindu philosophy. These schools have come to be synonymous with the greater religion of Hinduism, which was a development of the early Vedic religion. Schools of Hindu philosophy are Nyaya, Vaisesika, Samkhya, Yoga, Purva mimamsa and Vedanta. Other classifications also include Pashupata, Saiva, Raseśvara and Pāṇini Darśana with the other orthodox schools.[42]
Jain philosophy revolves around the concept of ahimsā (non-violence). The major contribution of the Jain philosophy was the doctrine of Anekantavada (multiplicity of view points). According to the Jain epistemology, knowledge is of five kinds – sensory knowledge, scriptural knowledge, clairvoyance, telepathy, and omniscience.[43]
Competition and integration between the various schools were intense during their formative years, especially between 500 BC and 200 AD. Some, like the Jain, Buddhist, Shaiva and Vedanta schools, survived, while others, like Samkhya and Ajivika, did not, either being assimilated or going extinct. The Sanskrit term for "philosopher" is dārśanika, one who is familiar with the systems of philosophy, or darśanas.[44]
Persian philosophy can be traced back to Old Iranian philosophical traditions and thoughts, with their ancient Indo-Iranian roots. These were considerably influenced by Zarathustra's teachings. Throughout Iranian history, and owing to remarkable political and social upheavals such as the Macedonian, Arab, and Mongol invasions of Persia, a wide spectrum of schools of thought arose. These espoused a variety of views on philosophical questions, extending from Old Iranian and mainly Zoroastrianism-influenced traditions to schools appearing in the late pre-Islamic era, such as Manichaeism and Mazdakism, as well as various post-Islamic schools. Iranian philosophy after the Arab invasion of Persia is characterized by different interactions with old Iranian philosophy, with Greek philosophy and with the development of Islamic philosophy. Illuminationism and transcendent theosophy are regarded as two of the main philosophical traditions of that era in Persia. Zoroastrianism has been identified as one of the key early influences on the development of philosophy.[45]
The history of western European medieval philosophy is traditionally divided into two main periods: the period in the Latin West following the Early Middle Ages until the 12th century, when the works of Aristotle and Plato were preserved and cultivated; and the "golden age" of the 12th, 13th and 14th centuries in the Latin West, which witnessed the culmination of the recovery of ancient philosophy, and significant developments in the fields of philosophy of religion, logic and metaphysics.
The medieval era was disparagingly treated by the Renaissance humanists, who saw it as a barbaric "middle" period between the classical age of Greek and Roman culture, and the "rebirth" or renaissance of classical culture. Yet this period of nearly a thousand years was the longest period of philosophical development in Europe, and possibly the richest. Jorge Gracia has argued that "in intensity, sophistication, and achievement, the philosophical flowering in the thirteenth century could be rightly said to rival the golden age of Greek philosophy in the fourth century B.C."[47]
Some problems discussed throughout this period are the relation of faith to reason, the existence and unity of God, the object of theology and metaphysics, the problems of knowledge, of universals, and of individuation.
Aquinas, the father of Thomism, was immensely influential in Catholic Europe; he placed a great emphasis on reason and argumentation, and was one of the first to use the new translation of Aristotle's metaphysical and epistemological writing. His work was a significant departure from the Neoplatonic and Augustinian thinking that had dominated much of early Scholasticism.
The Renaissance ("rebirth") was a period of transition between the Middle Ages and modern thought,[48] in which the recovery of classical texts helped shift philosophical interests away from technical studies in logic, metaphysics, and theology towards eclectic inquiries into morality, philology, and mysticism.[49][50] The study of the classics and the humane arts generally, such as history and literature, enjoyed a scholarly interest hitherto unknown in Christendom, a tendency referred to as humanism.[51][52] Displacing the medieval interest in metaphysics and logic, the humanists followed Petrarch in making man and his virtues the focus of philosophy.[53][54]
The study of classical philosophy also developed in two new ways. On the one hand, the study of Aristotle was changed through the influence of Averroism. The disagreements between these Averroist Aristotelians and more orthodox Catholic Aristotelians, such as Albertus Magnus and Thomas Aquinas, eventually contributed to the development of a "humanist Aristotelianism" in the Renaissance, as exemplified in the thought of Pietro Pomponazzi and Giacomo Zabarella. On the other hand, as an alternative to Aristotle, the study of Plato and the Neoplatonists became common. This was assisted by the rediscovery of works which had not been well known previously in Western Europe. Notable Renaissance Platonists include Nicholas of Cusa, and later Marsilio Ficino and Giovanni Pico della Mirandola.[54]
The Renaissance also renewed interest in anti-Aristotelian theories of nature considered as an organic, living whole comprehensible independently of theology, as in the work of Nicholas of Cusa, Nicholas Copernicus, Giordano Bruno, Telesius, and Tommaso Campanella.[55] Such movements in natural philosophy dovetailed with a revival of interest in occultism, magic, hermeticism, and astrology, which were thought to yield hidden ways of knowing and mastering nature (e.g., in Marsilio Ficino and Giovanni Pico della Mirandola).[56]
These new movements in philosophy developed contemporaneously with larger religious and political transformations in Europe: the Reformation and the decline of feudalism. Though the theologians of the Protestant Reformation showed little direct interest in philosophy, their destruction of the traditional foundations of theological and intellectual authority harmonized with a revival of fideism and skepticism in thinkers such as Erasmus, Montaigne, and Francisco Sanches.[57][58][59] Meanwhile, the gradual centralization of political power in nation-states was echoed by the emergence of secular political philosophies, as in the works of Niccolò Machiavelli (often described as the first modern political thinker, or a key turning point towards modern political thinking[60]), Thomas More, Erasmus, Justus Lipsius, Jean Bodin, and Hugo Grotius.[61][62]
Mid-Imperial Chinese philosophy is primarily defined by the development of Neo-Confucianism. During the Tang Dynasty, Buddhism from India also became a prominent philosophical and religious discipline. (Philosophy and religion were clearly distinguished in the West, whereas these concepts were more continuous in the East due to, for example, the philosophical concepts of Buddhism.)
Neo-Confucianism is a philosophical movement that advocated a more rationalist and secular form of Confucianism by rejecting superstitious and mystical elements of Daoism and Buddhism that had influenced Confucianism during and after the Han Dynasty.[63] Although the Neo-Confucianists were critical of Daoism and Buddhism,[64] the two did have an influence on the philosophy, and the Neo-Confucianists borrowed terms and concepts from both. However, unlike the Buddhists and Daoists, who saw metaphysics as a catalyst for spiritual development, religious enlightenment, and immortality, the Neo-Confucianists used metaphysics as a guide for developing a rationalist ethical philosophy.[65]
Neo-Confucianism has its origins in the Tang Dynasty; the Confucianist scholars Han Yu and Li Ao are seen as forebears of the Neo-Confucianists of the Song Dynasty.[64] The Song Dynasty philosopher Zhou Dunyi is seen as the first true "pioneer" of Neo-Confucianism, using Daoist metaphysics as a framework for his ethical philosophy.[65]
The period between the 5th and 9th centuries CE was the most brilliant epoch in the development of Indian philosophy, as Hindu and Buddhist philosophies flourished side by side.[66] Of these various schools of thought, the non-dualistic Advaita Vedanta emerged as the most influential[67] and most dominant school of philosophy.[68] The major philosophers of this school were Gaudapada, Adi Shankara and Vidyaranya.
Advaita Vedanta rejects theism and dualism by insisting that "Brahman [ultimate reality] is without parts or attributes...one without a second." Since Brahman has no properties, contains no internal diversity and is identical with the whole of reality, it cannot be understood as God.[69] Brahman, though indescribable, is at best described by Shankara as Satchidananda (merging "Sat" + "Chit" + "Ananda", i.e., Existence, Consciousness and Bliss). Advaita ushered in a new era in Indian philosophy, and as a result many new schools of thought arose in the medieval period. Some of them were Visishtadvaita (qualified monism), Dvaita (dualism), Dvaitadvaita (dualism-nondualism), Suddhadvaita (pure non-dualism), Achintya Bheda Abheda and Pratyabhijña (the recognitive school).
Avicenna advanced his "Floating Man" thought experiment concerning self-awareness, in which a man deprived of sense experience by being blindfolded and free-falling would still be aware of his own existence.[70]
Aztec philosophy was the school of philosophy developed by the Aztec Empire. The Aztecs had a well-developed school of philosophy, perhaps the most developed in the Americas and in many ways comparable to Greek philosophy, even amassing more texts than the ancient Greeks.[71] Aztec philosophy focused on dualism, monism, and aesthetics, and Aztec philosophers attempted to answer the main Aztec philosophical question of how to gain stability and balance in an ephemeral world.
Aztec philosophy saw the concept of Ometeotl as a unity that underlies the universe. Ometeotl forms, shapes, and is all things. Even things in opposition—light and dark, life and death—were seen as expressions of the same unity, Ometeotl. The belief in a unity with dualistic expressions compares with similar dialectical monist ideas in both Western and Eastern philosophies.[72] Aztec priests had a panentheistic view of religion but the popular Aztec religion maintained polytheism. Priests saw the different gods as aspects of the singular and transcendent unity of teotl but the masses were allowed to practice polytheism without understanding the true, unified nature of the Aztec gods.[72]
Ethiopian philosophy is the philosophical corpus of the territories of present-day Ethiopia and Eritrea. Besides via oral tradition, it was preserved early on in written form through Ge'ez manuscripts. This philosophy occupies a unique position within African philosophy. The character of Ethiopian philosophy is determined by the particular conditions of evolution of the Ethiopian culture. Thus, Ethiopian philosophy arises from the confluence of Greek and Patristic philosophy with traditional Ethiopian modes of thought. Because of the early isolation from its sources of Abrahamic spirituality – Byzantium and Alexandria – Ethiopia received some of its philosophical heritage through Arabic versions.
The sapiential literature developed under these circumstances is the result of a twofold effort of creative assimilation: on one side, a tuning of Orthodoxy to traditional modes of thought (never eradicated), and vice versa, and, on the other side, an absorption of Greek pagan and early Patristic thought into this developing Ethiopian-Christian synthesis. As a consequence, moral reflection of religious inspiration is prevalent, and the use of narrative, parable, apothegm and rich imagery is preferred to the use of abstract argument. This sapiential literature consists of translations and adaptations of some Greek texts, namely the Physiologus (ca. 5th century A.D.), The Life and Maxims of Skendes (11th century A.D.) and The Book of the Wise Philosophers (1510/22).
In the 17th century, the religious beliefs of Ethiopians were challenged by King Susenyos' adoption of Catholicism, and by a subsequent presence of Jesuit missionaries. The attempt to forcefully impose Catholicism upon his subjects during Susenyos' reign inspired further development of Ethiopian philosophy during the 17th century. Zera Yacob (1599–1692) is the most important exponent of this renaissance. His treatise Hatata (1667) is a work often included in the narrow canon of universal philosophy.
Chronologically, the early modern era of Western philosophy is usually identified with the 17th and 18th centuries, with the 18th century often being referred to as the Enlightenment.[73] Modern philosophy is distinguished from its predecessors by its increasing independence from traditional authorities such as the Church, academia, and Aristotelianism;[74][75] a new focus on the foundations of knowledge and metaphysical system-building;[76][77] and the emergence of modern physics out of natural philosophy.[78]
Other central topics of philosophy in this period include the nature of the mind and its relation to the body, the implications of the new natural sciences for traditional theological topics such as free will and God, and the emergence of a secular basis for moral and political philosophy.[79] These trends first distinctively coalesced in Francis Bacon's call for a new, empirical program for expanding knowledge, and soon found massively influential form in the mechanical physics and rationalist metaphysics of René Descartes.[80]
After Hegel's death in 1831, 19th-century philosophy largely turned against idealism in favor of varieties of philosophical naturalism, such as the positivism of Auguste Comte, the empiricism of John Stuart Mill, and the materialism of Karl Marx. Logic began a period of its most significant advances since the inception of the discipline, as increasing mathematical precision opened entire fields of inference to formalization in the work of George Boole and Gottlob Frege.[92] Other philosophers who initiated lines of thought that would continue to shape philosophy into the 20th century include:
Within the last century, philosophy has increasingly become a professional discipline practiced within universities, like other academic disciplines. Accordingly, it has become less general and more specialized. In the view of one prominent recent historian: "Philosophy has become a highly organized discipline, done by specialists primarily for other specialists. The number of philosophers has exploded, the volume of publication has swelled, and the subfields of serious philosophical investigation have multiplied. Not only is the broad field of philosophy today far too vast to be embraced by one mind, something similar is true even of many highly specialized subfields."[93]
In the English-speaking world, analytic philosophy became the dominant school for much of the 20th century. In the first half of the century, it was a cohesive school, shaped strongly by logical positivism, united by the notion that philosophical problems could and should be solved by attention to logic and language. The pioneering work of Bertrand Russell was a model for the early development of analytic philosophy, moving from a rejection of the idealism dominant in late 19th-century British philosophy to a neo-Humean empiricism, strengthened by the conceptual resources of modern mathematical logic.[94][95][96]
In the latter half of the 20th century, analytic philosophy diffused into a wide variety of disparate philosophical views, only loosely united by historical lines of influence and a self-identified commitment to clarity and rigor. The post-war transformation of the analytic program led in two broad directions: on one hand, an interest in ordinary language as a way of avoiding or redescribing traditional philosophical problems, and on the other, a more thoroughgoing naturalism that sought to dissolve the puzzles of modern philosophy via the results of the natural sciences (such as cognitive psychology and evolutionary biology). The shift in the work of Ludwig Wittgenstein, from a view congruent with logical positivism to a therapeutic dissolution of traditional philosophy as a linguistic misunderstanding of normal forms of life, was the most influential version of the first direction in analytic philosophy.[97][98] The later work of Russell and the philosophy of Willard Van Orman Quine are influential exemplars of the naturalist approach dominant in the second half of the 20th century.[99][100][101][102] But the diversity of analytic philosophy from the 1970s onward defies easy generalization: the naturalism of Quine and his epigoni was in some precincts superseded by a "new metaphysics" of possible worlds, as in the influential work of David Lewis.[103][104] Recently, the experimental philosophy movement has sought to reappraise philosophical problems through social science research techniques.
In continental Europe, no single school or temperament enjoyed dominance. The flight of the logical positivists from central Europe during the 1930s and 1940s, however, diminished philosophical interest in natural science, and an emphasis on the humanities, broadly construed, figures prominently in what is usually called "continental philosophy". 20th-century movements such as phenomenology, existentialism, modern hermeneutics, critical theory, structuralism, and poststructuralism are included within this loose category. The founder of phenomenology, Edmund Husserl, sought to study consciousness as experienced from a first-person perspective,[105][106] while Martin Heidegger drew on the ideas of Kierkegaard, Nietzsche, and Husserl to propose an unconventional existential approach to ontology.[107][108]
Forms of idealism were prevalent in philosophy from the 18th century to the early 20th century. Transcendental idealism, advocated by Immanuel Kant, is the view that there are limits on what can be understood, since there is much that cannot be brought under the conditions of objective judgment. Kant wrote his Critique of Pure Reason (1781–1787) in an attempt to reconcile the conflicting approaches of rationalism and empiricism, and to establish a new groundwork for studying metaphysics. Kant's intention with this work was to look at what we know and then consider what must be true about it, as a logical consequence of the way we know it. One major theme was that there are fundamental features of reality that escape our direct knowledge because of the natural limits of the human faculties.[109] Although Kant held that objective knowledge of the world required the mind to impose a conceptual or categorical framework on the stream of pure sensory data—a framework including space and time themselves—he maintained that things-in-themselves existed independently of our perceptions and judgments; he was therefore not an idealist in any simple sense. Kant's account of things-in-themselves is both controversial and highly complex. Continuing his work, Johann Gottlieb Fichte and Friedrich Schelling dispensed with belief in the independent existence of the world, and created a thoroughgoing idealist philosophy.
The most notable work of this German idealism was G. W. F. Hegel's Phenomenology of Spirit, of 1807. Hegel admitted that his ideas were not new, but claimed that all previous philosophies had been incomplete; his goal was to complete their work correctly. Hegel asserts that the twin aims of philosophy are to account for the contradictions apparent in human experience (which arise, for instance, out of the supposed contradictions between "being" and "not being"), and also simultaneously to resolve and preserve these contradictions by showing their compatibility at a higher level of examination ("being" and "not being" are resolved with "becoming"). This program of acceptance and reconciliation of contradictions is known as the "Hegelian dialectic". Philosophers influenced by Hegel include Ludwig Andreas Feuerbach, who coined the term projection as pertaining to our inability to recognize anything in the external world without projecting qualities of ourselves upon those things; Karl Marx; Friedrich Engels; and the British idealists, notably T. H. Green, J. M. E. McTaggart and F. H. Bradley.
Few 20th-century philosophers have embraced idealism. However, quite a few have embraced Hegelian dialectic. Immanuel Kant's "Copernican Turn" also remains an important philosophical concept today.
Pragmatism was founded in the spirit of finding a scientific concept of truth that does not depend on personal insight (revelation) or reference to some metaphysical realm. The meaning or purport of a statement should be judged by the effect its acceptance would have on practice. Truth is that opinion which inquiry taken far enough would ultimately reach.[110] For Charles Sanders Peirce these were principles of the inquirer's self-regulation, implied by the idea and hope that inquiry is not generally fruitless. The details of how these principles should be interpreted have been subject to discussion since Peirce first conceived them. Peirce's maxim of pragmatism is as follows: "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object."[111] Like postmodern neo-pragmatist Richard Rorty, many are convinced that pragmatism asserts that the truth of beliefs does not consist in their correspondence with reality, but in their usefulness and efficacy.[112]
The late 19th-century American philosophers Charles Sanders Peirce and William James were its co-founders, and it was later developed by John Dewey as instrumentalism. Since the usefulness of any belief at any time might be contingent on circumstance, Peirce and James conceptualised final truth as something only established by the future, final settlement of all opinion.[113] Critics have accused pragmatism of falling victim to a simple fallacy: because something that is true proves useful, its usefulness becomes the basis for its truth.[114] Thinkers in the pragmatist tradition have included John Dewey, George Santayana, Quine and C. I. Lewis. Pragmatism has more recently been taken in new directions by Richard Rorty, John Lachs, Donald Davidson, Susan Haack, and Hilary Putnam.
Edmund Husserl's phenomenology was an ambitious attempt to lay the foundations for an account of the structure of conscious experience in general.[115] An important part of Husserl's phenomenological project was to show that all conscious acts are directed at or about objective content, a feature that Husserl called intentionality.[116]
In the first part of his two-volume work, the Logical Investigations (1901), he launched an extended attack on psychologism. In the second part, he began to develop the technique of descriptive phenomenology, with the aim of showing how objective judgments are grounded in conscious experience—not, however, in the first-person experience of particular individuals, but in the properties essential to any experiences of the kind in question.[115]
He also attempted to identify the essential properties of any act of meaning. He developed the method further in Ideas (1913) as transcendental phenomenology, proposing to ground actual experience, and thus all fields of human knowledge, in the structure of consciousness of an ideal, or transcendental, ego. Later, he attempted to reconcile his transcendental standpoint with an acknowledgement of the intersubjective life-world in which real individual subjects interact. Husserl published only a few works in his lifetime, which treat phenomenology mainly in abstract methodological terms; but he left an enormous quantity of unpublished concrete analyses.
Husserl's work was immediately influential in Germany, with the foundation of phenomenological schools in Munich and Göttingen. Phenomenology later achieved international fame through the work of such philosophers as Martin Heidegger (formerly Husserl's research assistant), Maurice Merleau-Ponty, and Jean-Paul Sartre. Through the work of Heidegger and Sartre, Husserl's focus on subjective experience influenced aspects of existentialism.
Existentialism is a term applied to the work of a number of late 19th- and 20th-century philosophers who, despite profound doctrinal differences,[117][118] shared the belief that philosophical thinking begins with the human subject—not merely the thinking subject, but the acting, feeling, living human individual.[119] In existentialism, the individual's starting point is characterized by what has been called "the existential attitude", or a sense of disorientation and confusion in the face of an apparently meaningless or absurd world.[120] Many existentialists have also regarded traditional systematic or academic philosophy, in both style and content, as too abstract and remote from concrete human experience.[121][122]
Although they did not use the term, the 19th-century philosophers Søren Kierkegaard and Friedrich Nietzsche are widely regarded as the fathers of existentialism. Their influence, however, has extended beyond existentialist thought.[123][124][125]
The main target of Kierkegaard's writings was the idealist philosophical system of Hegel which, he thought, ignored or excluded the inner subjective life of living human beings. Kierkegaard, conversely, held that "truth is subjectivity", arguing that what is most important to an actual human being are questions dealing with an individual's inner relationship to existence. In particular, Kierkegaard, a Christian, believed that the truth of religious faith was a subjective question, and one to be wrestled with passionately.[126][127]
Although Kierkegaard and Nietzsche were among his influences, the extent to which the German philosopher Martin Heidegger should be considered an existentialist is debatable. In Being and Time he presented a method of rooting philosophical explanations in human existence (Dasein) to be analysed in terms of existential categories (existentiale); and this has led many commentators to treat him as an important figure in the existentialist movement. However, in The Letter on Humanism, Heidegger explicitly rejected the existentialism of Jean-Paul Sartre.
Sartre became the best-known proponent of existentialism, exploring it not only in theoretical works such as Being and Nothingness, but also in plays and novels. Sartre, along with Simone de Beauvoir, represented an avowedly atheistic branch of existentialism, which is now more closely associated with their ideas of nausea, contingency, bad faith, and the absurd than with Kierkegaard's spiritual angst. Nevertheless, the focus on the individual human being, responsible before the universe for the authenticity of his or her existence, is common to all these thinkers.
Inaugurated by the linguist Ferdinand de Saussure, structuralism sought to clarify systems of signs through analyzing the discourses they both limit and make possible. Saussure conceived of the sign as being delimited by all the other signs in the system, and ideas as being incapable of existence prior to linguistic structure, which articulates thought. This led continental thought away from humanism, and toward what was termed the decentering of man: language is no longer spoken by man to express a true inner self, but language speaks man.
Structuralism aspired to the rigor of a hard science, but its positivism soon came under fire from poststructuralism, a wide field of thinkers, some of whom were once structuralists themselves but later came to criticize it. Structuralists believed, for example, that they could analyze systems from an external, objective standpoint, but poststructuralists argued that this is incorrect: one cannot transcend structures, and analysis is itself determined by what it examines. And while structuralists treated the distinction between signifier and signified as crystalline, poststructuralists asserted that every attempt to grasp the signified results in more signifiers, so meaning is always in a state of being deferred, making an ultimate interpretation impossible.
Largely Aristotelian in its approach and content, Thomism is a philosophical tradition that follows the writings of Thomas Aquinas. His work has been read, studied, and disputed since the 13th century, especially by Roman Catholics. However, Aquinas has enjoyed a revived interest since the late 19th century, among both atheists (like Philippa Foot) and theists (like Elizabeth Anscombe).[128]
Thomist philosophers tend to be rationalists in epistemology, as well as metaphysical realists, and virtue ethicists. Human beings are rational animals whose good can be known by reason and pursued by the will. With regard to the soul, Thomists (like Aristotle) argue that soul or psyche is real and immaterial but inseparable from matter in organisms. Soul is the form of the body. Thomists accept all four of Aristotle's causes as natural, including teleological or final causes. In this way, though Aquinas argued that whatever is in the intellect begins in the senses, natural teleology can be discerned with the senses and abstracted from nature through induction.[129]
Contemporary Thomism contains a diversity of philosophical styles, from Neo-Scholasticism to Existential Thomism.[130] The so-called new natural lawyers like Germain Grisez and Robert P. George have applied Thomistic legal principles to contemporary ethical debates, while cognitive neuroscientist Walter Freeman proposes that Thomism is the philosophical system explaining cognition that is most compatible with neurodynamics, in a 2008 article in the journal Mind and Matter entitled "Nonlinear Brain Dynamics and Intention According to Aquinas." So-called Analytical Thomism of John Haldane and others encourages dialogue between analytic philosophy and broadly Aristotelian philosophy of mind, psychology, and hylomorphic metaphysics.[131] Other modern or contemporary Thomists include Eleonore Stump, Alasdair MacIntyre, and John Finnis.
The term analytic philosophy roughly designates a group of philosophical methods that stress detailed argumentation, attention to semantics, use of classical and non-classical logics, and clarity of meaning above all other criteria. Some have held that philosophical problems arise through misuse of language or because of misunderstandings of the logic of our language, while some maintain that there are genuine philosophical problems and that philosophy is continuous with science. Michael Dummett in his Origins of Analytical Philosophy makes the case for counting Gottlob Frege's The Foundations of Arithmetic as the first analytic work, on the grounds that in that book Frege took the linguistic turn, analyzing philosophical problems through language. Bertrand Russell and G.E. Moore are also often counted as founders of analytic philosophy, beginning with their rejection of British idealism, their defense of realism and the emphasis they laid on the legitimacy of analysis.
Russell's classic works The Principles of Mathematics,[132] On Denoting and Principia Mathematica (with Alfred North Whitehead), aside from greatly promoting the use of mathematical logic in philosophy, set the ground for much of the research program in the early stages of the analytic tradition, emphasizing such problems as: the reference of proper names, whether 'existence' is a property, the nature of propositions, the analysis of definite descriptions, and the foundations of mathematics; as well as exploring issues of ontological commitment and even metaphysical problems regarding time, the nature of matter, mind, persistence and change, which Russell often tackled with the aid of mathematical logic. Russell and Moore's philosophy, in the beginning of the 20th century, developed as a critique of Hegel and his British followers in particular, and of grand systems of speculative philosophy in general, though by no means do all analytic philosophers reject the philosophy of Hegel (see Charles Taylor) or speculative philosophy. Some schools in the group include logical positivism and ordinary language philosophy, both markedly influenced by Russell's and Wittgenstein's development of logical atomism, the former positively and the latter negatively.
In 1921, Ludwig Wittgenstein, who studied under Russell at Cambridge, published his Tractatus Logico-Philosophicus, which gave a rigidly "logical" account of linguistic and philosophical issues. At the time, he understood most of the problems of philosophy as mere puzzles of language, which could be solved by investigating and then minding the logical structure of language. Years later, he reversed a number of the positions he set out in the Tractatus, in for example his second major work, Philosophical Investigations (1953). Investigations was influential in the development of "ordinary language philosophy," which was promoted by Gilbert Ryle, J.L. Austin, and a few others.
In the United States, meanwhile, the philosophy of Quine was having a major influence, with the paper Two Dogmas of Empiricism. In that paper Quine criticizes the distinction between analytic and synthetic statements, arguing that a clear conception of analyticity is unattainable. He argued for holism, the thesis that language, including scientific language, is a set of interconnected sentences, none of which can be verified on its own; rather, the sentences in the language depend on each other for their meaning and truth conditions. A consequence of Quine's approach is that language as a whole has only a thin relation to experience. Some sentences that refer directly to experience might be modified by sense impressions, but as the whole of language is theory-laden, for the whole language to be modified, more than this is required. However, most of the linguistic structure can in principle be revised, even logic, in order to better model the world.
Notable students of Quine include Donald Davidson and Daniel Dennett. The former devised a program for giving a semantics to natural language and thereby answering the philosophical conundrum "what is meaning?". A crucial part of the program was the use of Alfred Tarski's semantic theory of truth. Dummett, among others, argued that truth conditions should be dispensed with in the theory of meaning and replaced by assertability conditions. Some propositions, on this view, are neither true nor false, and thus such a theory of meaning entails a rejection of the law of the excluded middle. This, for Dummett, entails antirealism, as Russell himself pointed out in his An Inquiry into Meaning and Truth.
By the 1970s there was a renewed interest in many traditional philosophical problems among the younger generations of analytic philosophers. David Lewis, Saul Kripke, Derek Parfit and others took an interest in traditional metaphysical problems, which they began exploring by the use of logic and philosophy of language. Among those problems some distinguished ones were: free will, essentialism, the nature of personal identity, identity over time, the nature of the mind, the nature of causal laws, space-time, the properties of material beings, modality, etc. In those universities where analytic philosophy has spread, these problems are still being discussed passionately. Analytic philosophers are also interested in the methodology of analytic philosophy itself, with Timothy Williamson, Wykeham Professor of Logic at Oxford, recently publishing a book entitled The Philosophy of Philosophy. Some influential figures in contemporary analytic philosophy are: Timothy Williamson, David Lewis, John Searle, Thomas Nagel, Hilary Putnam, Michael Dummett, Peter van Inwagen, Saul Kripke and Patricia Churchland. Analytic philosophy has sometimes been accused of not contributing to the political debate or to traditional questions in aesthetics. However, with the appearance of A Theory of Justice by John Rawls and Anarchy, State and Utopia by Robert Nozick, analytic political philosophy acquired respectability. Analytic philosophers have also shown depth in their investigations of aesthetics, with Roger Scruton, Nelson Goodman, Arthur Danto and others developing the subject to its current shape.
Other important applications can be found in epistemology, which aids in understanding the requisites for knowledge, sound evidence, and justified belief (important in law, economics, decision theory, and a number of other disciplines). The philosophy of science discusses the underpinnings of the scientific method and has affected the nature of scientific investigation and argumentation. As such, philosophy has fundamental implications for science as a whole. For example, the strictly empirical approach of Skinner's behaviorism affected for decades the approach of the American psychological establishment. Deep ecology and animal rights examine the moral situation of humans as occupants of a world that has non-human occupants to consider also. Aesthetics can help to interpret discussions of music, literature, the plastic arts, and the whole artistic dimension of life. In general, the various philosophies strive to provide practical activities with a deeper understanding of the theoretical or conceptual underpinnings of their fields.
+
Often philosophy is seen as an investigation into an area not sufficiently well understood to be its own branch of knowledge. For example, what were once philosophical pursuits have evolved into modern-day fields such as psychology, sociology, linguistics, and economics.
^Jenny Teichmann and Katherine C. Evans, Philosophy: A Beginner's Guide (Blackwell Publishing, 1999), p. 1: "Philosophy is a study of problems which are ultimate, abstract and very general. These problems are concerned with the nature of existence, knowledge, morality, reason and human purpose."
+
^A.C. Grayling (1999). "Editor's Introduction". In A.C. Grayling, ed. Philosophy 1: A Guide through the Subject. vol. 1. Oxford University Press. p. 1. ISBN 978-0-19-875243-1. The aim of philosophical inquiry is to gain insight into questions about knowledge, truth, reason, reality, meaning, mind, and value. Other human endeavors explore aspects of these same questions, not least art and literature, but it is philosophy that mounts a direct assault upon them...
+
^Definition of "philosophy, n.". Oxford English Dictionary Online. June 2015. Oxford University Press.http://www.oed.com/view/Entry/142505?rskey=uk0M8u&result=1 (accessed August 05, 2015): "7. The study of the fundamental nature of knowledge, reality, and existence, and the basis and limits of human understanding; this considered as an academic discipline. (Now the usual sense.)
^The definition of philosophy is: "1. orig., love of, or the search for, wisdom or knowledge 2. theory or logical analysis of the principles underlying conduct, thought, knowledge, and the nature of the universe". Webster's New World Dictionary (Second College ed.).
^Anthony Quinton (1995). "The ethics of philosophical practice". In T. Honderich, ed. The Oxford Companion to Philosophy. Oxford University Press. p. 666. ISBN 978-0-19-866132-0. Philosophy is rationally critical thinking, of a more or less systematic kind about the general nature of the world (metaphysics or theory of existence), the justification of belief (epistemology or theory of knowledge), and the conduct of life (ethics or theory of value). Each of the three elements in this list has a non-philosophical counterpart, from which it is distinguished by its explicitly rational and critical way of proceeding and by its systematic nature. Everyone has some general conception of the nature of the world in which they live and of their place in it. Metaphysics replaces the unargued assumptions embodied in such a conception with a rational and organized body of beliefs about the world as a whole. Everyone has occasion to doubt and question beliefs, their own or those of others, with more or less success and without any theory of what they are doing. Epistemology seeks by argument to make explicit the rules of correct belief formation. Everyone governs their conduct by directing it to desired or valued ends. Ethics, or moral philosophy, in its most inclusive sense, seeks to articulate, in rationally systematic form, the rules or principles involved.
^G & C. Merriam Co. (1913). Noah Porter, eds. Webster's Revised Unabridged Dictionary (1913 ed.). G & C. Merriam Co. p. 501. Retrieved 13 May 2012. E*pis`te*mol"o*gy (?), n. [Gr. knowledge + -logy.] The theory or science of the method or grounds of knowledge.
+
^Descartes, René (1644). The Principles of Philosophy (IX).
+
^"Idealism". philosophybasics.com. Retrieved 20 December 2011.
+
^Rodriguez-Pereyra, Gonzalo (2008). "Nominalism in Metaphysics", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). (link)
+
+
"The word 'Nominalism', as used by contemporary philosophers in the Anglo-American tradition, is ambiguous. In one sense, its most traditional sense deriving from the Middle Ages, it implies the rejection of universals. In another, more modern but equally entrenched sense, it implies the rejection of abstract objects"
+
+
+
^Strawson, P. F. "Conceptualism." Universals, concepts and qualities: new essays on the meaning of predicates. Ashgate Publishing, 2006.
+
^Hobbes, Thomas (1985). Leviathan. Penguin Classics.
+
^Sigmund, Paul E. (2005). The Selected Political Writings of John Locke. Norton. ISBN 978-0-393-96451-6.
^For example, the multi-author Oxford History of Western Philosophy breaks the subject into eight volumes: ancient, medieval, Renaissance, two volumes covering the period 1600–1750, two volumes covering 1750–1945, and one volume on analytic philosophy since 1945. Anthony Kenny's New History of Western Philosophy is divided in four volumes: ancient, medieval, early modern (1500–1830), and later modern (1830 to the present). The more technical Cambridge History of Philosophy divides the topic into nine periods: Greek philosophy to Aristotle, Hellenistic philosophy, later Greek and early medieval, later medieval, Renaissance, three volumes for the 17th–19th centuries, and a final volume on 1870–1945.
^Giorgio Buccellati (1981), "Wisdom and Not: The Case of Mesopotamia", Journal of the American Oriental Society 101 (1), pp. 35–47.
+
^Giorgio Buccellati (1981), "Wisdom and Not: The Case of Mesopotamia", Journal of the American Oriental Society 101 (1), pp. 35–47 [43].
+
^ ab Ebrey, Patricia (2010). The Cambridge Illustrated History of China. Cambridge University Press. p. 42.
+
^Juergensmeyer, Mark (2005). Religion in global civil society. Oxford University Press. p. 70. ISBN 978-0-19-518835-6. ...humanist philosophies such as Confucianism, which do not share a belief in divine law and do not exalt faithfulness to a higher law as a manifestation of divine will
^Cicero, Tusculan Disputations, 5.3.8–9 = Heraclides Ponticus fr. 88 Wehrli, Diogenes Laërtius 1.12, 8.8, Iamblichus VP 58. Burkert attempted to discredit this ancient tradition, but it has been defended by C.J. De Vogel, Pythagoras and Early Pythagoreanism (1966), pp. 97–102, and C. Riedweg, Pythagoras: His Life, Teaching, And Influence (2005), p. 92.
+
^p 22, The Principal Upanisads, Harper Collins, 1994
^Cowell, E.B.; Gough, A.E. (1882). Sarva-Darsana Sangraha of Madhava Acharya: Review of Different Systems of Hindu Philosophy. New Delhi: Indian Books Centre/Sri Satguru Publications. p. xii. ISBN 978-81-7030-875-1.
^Blackburn, Simon (1994). The Oxford Dictionary of Philosophy. Oxford: Oxford University Press.
+
^Frederick Copleston, A History of Philosophy, Volume II: From Augustine to Scotus (Burns & Oates, 1950), p. 1, dates medieval philosophy proper from the Carolingian Renaissance in the eighth century to the end of the fourteenth century, though he includes Augustine and the Patristic fathers as precursors. Desmond Henry, in Paul Edwards (ed.), The Encyclopedia of Philosophy (Macmillan, 1967), vol. 5, pp. 252–257, starts with Augustine and ends with Nicholas of Oresme in the late fourteenth century. David Luscombe, Medieval Thought (Oxford University Press, 1997), dates medieval philosophy from the conversion of Constantine in 312 to the Protestant Reformation in the 1520s. Christopher Hughes, in A.C. Grayling (ed.), Philosophy 2: Further through the Subject (Oxford University Press, 1998), covers philosophers from Augustine to Ockham. Jorge J.E. Gracia, in Nicholas Bunnin and E.P. Tsui-James (eds.), The Blackwell Companion to Philosophy, 2nd ed. (Blackwell, 2003), p. 620, identifies medieval philosophy as running from Augustine to John of St. Thomas in the seventeenth century. Anthony Kenny, A New History of Western Philosophy, Volume II: Medieval Philosophy (Oxford University Press, 2005), begins with Augustine and ends with the Lateran Council of 1512.
^Charles Schmitt and Quentin Skinner (eds.), The Cambridge History of Renaissance Philosophy (Cambridge University Press, 1988), p. 5, loosely define the period as extending "from the age of Ockham to the revisionary work of Bacon, Descartes and their contemporaries."
+
^Frederick Copleston, A History of Philosophy, Volume III: From Ockham to Suarez (The Newman Press, 1953) p. 18: "When one looks at Renaissance philosophy … one is faced at first sight with a rather bewildering assortment of philosophies."
+
^Brian Copenhaver and Charles Schmitt, Renaissance Philosophy (Oxford University Press, 1992), p. 4: "one may identify the hallmark of Renaissance philosophy as an accelerated and enlarged interest, stimulated by newly available texts, in primary sources of Greek and Roman thought that were previously unknown or partially known or little read."
+
^Jorge J.E. Gracia in Nicholas Bunnin and E.P. Tsui-James (eds.), The Blackwell Companion to Philosophy, 2nd ed. (Blackwell, 2002), p. 621: "the humanists … restored man to the centre of attention and channeled their efforts to the recovery and transmission of classical learning, particularly in the philosophy of Plato."
+
^Copleston, ibid.: "The bulk of Renaissance thinkers, scholars and scientists were, of course, Christians … but none the less the classical revival … helped to bring to the fore a conception of autonomous man or an idea of the development of the human personality, which, though generally Christian, was more 'naturalistic' and less ascetic than the mediaeval conception."
+
^Charles B. Schmitt and Quentin Skinner (eds.), The Cambridge History of Renaissance Philosophy, pp. 61 and 63: "From Petrarch the early humanists learnt their conviction that the revival of humanae literae was only the first step in a greater intellectual renewal" […] "the very conception of philosophy was changing because its chief object was now man—man was at centre of every inquiry".
+
^ ab Cassirer; Kristeller; Randall, eds. (1948). "Introduction". The Renaissance Philosophy of Man. University of Chicago Press.
+
^Copenhaver and Schmitt, Renaissance Philosophy, pp. 285–328.
+
^Pico Della Mirandola, Conclusiones philosophicae, cabalisticae et theologicae; Giordano Bruno, De Magia
+
^Richard Popkin, The History of Scepticism from Savonarola to Bayle (Oxford University Press, 2003).
^Kenny, A New History of Western Philosophy, vol. 3 (Oxford University Press, 2006), p. 8: "The Lutheran Reformation […] gave new impetus to the sceptical trend."
+
^"Machiavelli appears as the first modern political thinker" Williams, Garrath. "Hobbes: Moral and Political Philosophy". Internet Encyclopedia of Philosophy.. "Machiavelli ought not really to be classified as either purely an "ancient" or a "modern," but instead deserves to be located in the interstices between the two." Nederman, Cary. "Niccolò Machiavelli". Stanford Encyclopedia of Philosophy.
+
^Copenhaver and Schmitt, Renaissance Philosophy, pp. 274–284.
+
^Schmitt and Skinner, The Cambridge History of Renaissance Philosophy, pp. 430–452.
+
^Blocker, H. Gene; Starling, Christopher L. (2001). Japanese Philosophy. SUNY Press. p. 64.
^Donald Rutherford, The Cambridge Companion to Early Modern Philosophy (Cambridge University Press, 2006), p. xiii, defines its subject thus: "what has come to be known as "early modern philosophy"—roughly, philosophy spanning the period between the end of the sixteenth century and the end of the eighteenth century, or, in terms of figures, Montaigne through Kant." Steven Nadler, A Companion to Early Modern Philosophy (Blackwell, 2002), p. 1, likewise identifies its subject as "the seventeenth and eighteenth centuries". Anthony Kenny, The Oxford History of Western Philosophy (Clarendon: Oxford University Press, 1994), p. 107, introduces "early modern philosophy" as "the writings of the classical philosophers of the seventeenth and eighteenth centuries in Europe".
+
^Steven Nadler, A Companion to Early Modern Philosophy, pp. 1–2: "By the seventeenth century […] it had become more common to find original philosophical minds working outside the strictures of the university—i.e., ecclesiastic—framework. […] by the end of the eighteenth century, [philosophy] was a secular enterprise."
+
^Anthony Kenny, A New History of Western Philosophy, vol. 3 (Oxford University Press, 2006), p. xii: "To someone approaching the early modern period of philosophy from an ancient and medieval background the most striking feature of the age is the absence of Aristotle from the philosophic scene."
+
^Donald Rutherford, The Cambridge Companion to Early Modern Philosophy (Cambridge University Press, 2006), p. 1: "epistemology assumes a new significance in the early modern period as philosophers strive to define the conditions and limits of human knowledge."
+
^Kenny, A New History of Western Philosophy, vol. 3, p. 211: "The period between Descartes and Hegel was the great age of metaphysical system-building."
+
^Kenny, A New History of Western Philosophy, vol. 3, pp. 179–180: "the seventeenth century saw the gradual separation of the old discipline of natural philosophy into the science of physics […] [b]y the nineteenth century physics was a fully mature empirical science, operating independently of philosophy."
+
^Kenny, A New History of Western Philosophy, vol. 3, pp. 212–331.
+
^Nadler, A Companion to Early Modern Philosophy, pp. 2–3: "Why should the early modern period in philosophy begin with Descartes and Bacon, for example, rather than with Erasmus and Montaigne? […] Suffice it to say that at the beginning of the seventeenth century, and especially with Bacon and Descartes, certain questions and concerns come to the fore—a variety of issues that motivated the inquiries and debates that would characterize much philosophical thinking for the next two centuries."
+
^"Hobbes: Moral and Political Philosophy". Internet Encyclopedia of Philosophy. "Hobbes is the founding father of modern political philosophy. Directly or indirectly, he has set the terms of debate about the fundamentals of political life right into our own times."
^Rutherford, The Cambridge Companion to Early Modern Philosophy, p. 1: "Most often this [period] has been associated with the achievements of a handful of great thinkers: the so-called 'rationalists' (Descartes, Spinoza, Leibniz) and 'empiricists' (Locke, Berkeley, Hume), whose inquiries culminate in Kant's 'Critical philosophy.' These canonical figures have been celebrated for the depth and rigor of their treatments of perennial philosophical questions..."
+
^Nadler, A Companion to Early Modern Philosophy, p. 2: "The study of early modern philosophy demands that we pay attention to a wide variety of questions and an expansive pantheon of thinkers: the traditional canonical figures (Descartes, Spinoza, Leibniz, Locke, Berkeley, and Hume), to be sure, but also a large 'supporting cast'..."
+
^Bruce Kuklick, "Seven Thinkers and How They Grew: Descartes, Spinoza, Leibniz; Locke, Berkeley, Hume; Kant" in Rorty, Schneewind, and Skinner (eds.), Philosophy in History (Cambridge University Press, 1984), p. 125: "Literary, philosophical, and historical studies often rely on a notion of what is canonical. In American philosophy scholars go from Jonathan Edwards to John Dewey; in American literature from James Fenimore Cooper to F. Scott Fitzgerald; in political theory from Plato to Hobbes and Locke […] The texts or authors who fill in the blanks from A to Z in these, and other intellectual traditions, constitute the canon, and there is an accompanying narrative that links text to text or author to author, a 'history of' American literature, economic thought, and so on. The most conventional of such histories are embodied in university courses and the textbooks that accompany them. This essay examines one such course, the History of Modern Philosophy, and the texts that helped to create it. If a philosopher in the United States were asked why the seven people in my title comprise Modern Philosophy, the initial response would be: they were the best, and there are historical and philosophical connections among them."
+
^Rutherford, The Cambridge Companion to Early Modern Philosophy, p. 1.
+
^Kenny, A New History of Western Philosophy, vol. 3, p. xiii.
+
^Nadler, A Companion to Early Modern Philosophy, p. 3.
+
^Shand, John (ed.) Central Works of Philosophy, Vol.3 The Nineteenth Century (McGill-Queens, 2005)
+
^Thomas Baldwin (ed.), The Cambridge History of Philosophy 1870–1945 (Cambridge University Press, 2003), p. 4: "by the 1870s Germany contained many of the best universities in the world. […] There were certainly more professors of philosophy in Germany in 1870 than anywhere else in the world, and perhaps more even than everywhere else put together."
+
^Beiser, Frederick C. The Cambridge Companion to Hegel, (Cambridge, 1993).
+
^Baldwin (ed.), The Cambridge History of Philosophy 1870–1945, p. 119: "within a hundred years of the first stirrings in the early nineteenth century [logic] had undergone the most fundamental transformation and substantial advance in its history."
+
^Scott Soames, Philosophical Analysis in the Twentieth Century, vol. 2, p. 463.
+
^Stanford Encyclopedia of Philosophy, "Bertrand Russell", 1 May 2003: "Russell is generally recognized as one of the founders of modern analytic philosophy. […] he is regularly credited with being one of the most important logicians of the twentieth century."
+
^Paul Edwards (ed.), Encyclopedia of Philosophy, vol. 7 (Macmillan, 1967), p. 239: "Russell has exercised an influence on the course of Anglo-American philosophy in the twentieth century second to that of no other individual."
+
^Thomas Baldwin (ed.), The Cambridge History of Philosophy 1870–1945 (Cambridge University Press, 2003), p. 376: "[…] the three greatest European philosophers of the twentieth century—Heidegger, Russell, and Wittgenstein."
+
^Avrum Stroll, Twentieth-Century Analytic Philosophy (Columbia University Press, 2000), p. 252: "More than any other analytic philosopher, [Wittgenstein] has changed the thinking of a whole generation."
+
^"Wittgenstein, Ludwig" in the Internet Encyclopedia of Philosophy: "Ludwig Wittgenstein is one of the most influential philosophers of the twentieth century, and regarded by some as the most important since Immanuel Kant."
+
^Thomas Baldwin, Contemporary Philosophy (Oxford University Press, 2001), p. 90: "[Quine] has been, without question, the most influential American philosopher of the second half of the twentieth century."
+
^Peter Hylton, "Quine", in the Stanford Encyclopedia of Philosophy: "Quine's work has been extremely influential and has done much to shape the course of philosophy in the second-half of the twentieth century and into the twenty-first."
+
^Andrew Bailey, First Philosophy: Knowledge and Reality (Broadview Press, 2004), p. 274: "Willard Van Orman Quine (1908–2000) was uncontroversially one of the most important philosophers of the twentieth century."
+
^Anthony Kenny, Philosophy in the Modern World (Oxford University Press, 2007), p. 64: "After Wittgenstein's death many people regarded W.V.O. Quine (1908–2000) as the doyen of Anglophone philosophy."
+
^Stanford Encyclopedia of Philosophy [2]: "David Lewis (1941–2001) was one of the most important philosophers of the 20th Century. He made significant contributions to philosophy of language, philosophy of mathematics, philosophy of science, decision theory, epistemology, meta-ethics and aesthetics. In most of these fields he is essential reading; in many of them he is among the most important figures of recent decades. And this list leaves out his two most significant contributions."
+
^John Perry, Michael Bratman, John Martin Fischer (eds.), Introduction to Philosophy: Classical and Contemporary Readings, 4th ed. (Oxford University Press, 2006), p. 302: "David Lewis (1941–2001) was one of the most important philosophers of the twentieth century."
+
^"Edmund Husserl", in The Stanford Encyclopedia of Philosophy: "Edmund Husserl was the principal founder of phenomenology—and thus one of the most influential philosophers of the 20th century."
+
^"Husserl, Edmund", in the Internet Encyclopedia of Philosophy: "he is arguably one of the most important and influential philosophers of the twentieth century."
+
^Raymond Geuss, in Thomas Baldwin (ed.), The Cambridge History of Philosophy 1870–1945 (Cambridge University Press, 2003), p. 497: "Heidegger is by a wide margin the single most influential philosopher of the twentieth century."
+
^"Heidegger, Martin", in the Internet Encyclopedia of Philosophy: "Martin Heidegger is widely acknowledged to be one of the most original and important philosophers of the 20th century".
+
^Kant, Immanuel (1990). Critique of Pure Reason. Prometheus Books. ISBN 978-0-87975-596-6.
+
^Peirce, C. S. (1878), "How to Make Our Ideas Clear", Popular Science Monthly, v. 12, 286–302. Reprinted often, including Collected Papers v. 5, paragraphs 388–410 and Essential Peirce v. 1, 124–41. See end of §II for the pragmatic maxim. See third and fourth paragraphs in §IV for the discoverability of truth and the real by sufficient investigation. Also see quotes from Peirce from across the years in the entries for "Truth" and "Pragmatism, Maxim of..." in the Commens Dictionary of Peirce's Terms, Mats Bergman and Sami Paavola, editors, University of Helsinki.
+
^Peirce on p. 293 of "How to Make Our Ideas Clear", Popular Science Monthly, v. 12, pp. 286–302. Reprinted widely, including Collected Papers of Charles Sanders Peirce (CP) v. 5, paragraphs 388–410.
+
^Rorty, Richard (1982). The Consequences of Pragmatism. Minnesota: Minnesota University Press. p. xvi.
+
^Putnam, Hilary (1995). Pragmatism: An Open Question. Oxford: Blackwell. pp. 8–12.
+
^Pratt, J. B. (1909). What is Pragmatism?. New York: Macmillan. p. 89.
+
^ abWoodruff Smith, David (2007). Husserl. Routledge.
+
^Dreyfus, Hubert (2006). A Companion to Phenomenology and Existentialism. Blackwell.
+
^John Macquarrie, Existentialism, New York (1972), pages 18–21.
+
^Oxford Companion to Philosophy, ed. Ted Honderich, New York (1995), page 259.
+
^John Macquarrie, Existentialism, New York (1972), pages 14–15.
+
^Robert C. Solomon, Existentialism (McGraw-Hill, 1974, pages 1–2)
+
^Ernst Breisach, Introduction to Modern Existentialism, New York (1962), page 5
+
^Walter Kaufmann, Existentialism: From Dostoevesky to Sartre, New York (1956) page 12
+
^Matustik, Martin J. (1995). Kierkegaard in Post/Modernity. Indiana University Press. ISBN 978-0-253-20967-2.
Bullock, Alan, R. B. Woodings, and John Cumming, eds. The Fontana Dictionary of Modern Thinkers, in series, Fontana Original[s]. Hammersmith, Eng.: Fontana Press, 1992, cop. 1983. xxv, 867 p. ISBN 978-0-00-636965-3
+
Bullock, Alan, and Oliver Stallybrass, jt. eds. The Harper Dictionary of Modern Thought. New York: Harper & Row, 1977. xix, 684 p. N.B.: "First published in England under the title, The Fontana Dictionary of Modern Thought." ISBN 978-0-06-010578-5
+
Julia, Didier. Dictionnaire de la philosophie. Responsible éditorial, Emmanuel de Waresquiel; secretariat de rédaction, Joelle Narjollet. [Éd. rev.]. Paris: Larousse, 2006. 301, [1] p. + xvi p. of ill. ISBN 978-2-03-583309-9
+
Reese, W. L. Dictionary of Philosophy and Religion: Eastern and Western Thought. Atlantic Highlands, N.J.: Humanities Press, 1980. iv, 644 p. ISBN 978-0-391-00688-1
Classics of Philosophy (Vols. 1 & 2, 2nd edition) by Louis P. Pojman
+
Classics of Philosophy: The 20th Century (Vol. 3) by Louis P. Pojman
+
The English Philosophers from Bacon to Mill by Edwin Arthur
+
European Philosophers from Descartes to Nietzsche by Monroe Beardsley
+
Contemporary Analytic Philosophy: Core Readings by James Baillie
+
Existentialism: Basic Writings (Second Edition) by Charles Guignon, Derk Pereboom
+
The Phenomenology Reader by Dermot Moran, Timothy Mooney
+
Medieval Islamic Philosophical Writings edited by Muhammad Ali Khalidi
+
A Source Book in Indian Philosophy by Sarvepalli Radhakrishnan, Charles A. Moore
+
Kim, J. and Ernest Sosa, Ed. (1999). Metaphysics: An Anthology. Blackwell Philosophy Anthologies. Oxford, Blackwell Publishers Ltd.
+
The Oxford Handbook of Free Will (2004) edited by Robert Kane
+
Husserl, Edmund and Donn Welton (1999). The Essential Husserl: Basic Writings in Transcendental Phenomenology, Indiana University Press. ISBN 978-0-253-21273-3
+
Cottingham, John. Western Philosophy: An Anthology. 2nd ed. Malden, MA: Blackwell Pub., 2008. Print. Blackwell Philosophy Anthologies.
The earliest known programmable machine preceded the invention of the digital computer: it was the automatic flute player described in the 9th century by the brothers Musa in Baghdad, at the time a major centre of knowledge.[1] From the early 1800s, "programs" were used to direct the behavior of machines such as Jacquard looms and player pianos.[2] Thousands of different programming languages have been created, mainly in the computer field, and many more are still being created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages use other forms of program specification such as the declarative form (i.e., the desired result is specified, not how to achieve it).
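The imperative/declarative contrast can be sketched in a few lines of Python (an illustrative example; the function names are invented here, not drawn from any language mentioned in this article):

```python
# Imperative: spell out the sequence of operations step by step.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:      # explicit loop
        total += n * n     # explicit accumulation
    return total

# Declarative: state the desired result; how to iterate is left
# to the language implementation.
def sum_of_squares_declarative(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_imperative(range(1, 11)))   # 385
print(sum_of_squares_declarative(range(1, 11)))  # 385
```

Both functions compute the same result; they differ only in whether the programmer describes the steps or the outcome.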
+
The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard), while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.
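The syntax/semantics split can be illustrated with a small Python sketch (the one-line program below is hypothetical): parsing checks only the form of a program, while executing it exercises its meaning.

```python
import ast

source = '"hello" + 1'   # syntactically well-formed, semantically meaningless

# Syntax: parsing succeeds, so the form is valid Python.
ast.parse(source)

# Semantics: execution fails, because Python assigns no meaning
# to adding a string and an integer.
try:
    eval(source)
    outcome = "evaluated"
except TypeError:
    outcome = "TypeError"

print(outcome)  # TypeError
```

A specification document or reference implementation must pin down both components: what programs look like, and what they do.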
A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[3] Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms.[3][4] Traits often considered important for what constitutes a programming language include:
+
+
Function and target
+
A computer programming language is a language used to write computer programs, which involve a computer performing some kind of computation[5] or algorithm and possibly control external devices such as printers, disk drives, robots,[6] and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[7] In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way.[8] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.
+
Abstractions
+
Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle;[9] this principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.[10]
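As a minimal sketch of such an abstraction (the `Stack` class here is hypothetical, not taken from any cited source), a Python data structure can expose a small set of operations while hiding its underlying representation:

```python
class Stack:
    """A stack abstraction: callers push and pop items without
    knowing that a Python list holds the elements underneath."""

    def __init__(self):
        self._items = []          # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

Because callers depend only on the interface, the representation (a list, a linked structure, etc.) can change without affecting any code that uses the stack.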
Markup languages like XML, HTML or troff, which define structured data, are not usually considered programming languages.[13][14][15] Programming languages may, however, share syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete XML dialect.[16][17][18] Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.[19][20]
+
The term computer language is sometimes used interchangeably with programming language.[21] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[22] In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[23]
+
Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[24] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, even though they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[25]
The earliest computers were often programmed without the help of a programming language, by writing programs in absolute machine language. The programs, in decimal or binary form, were read in from punched cards or magnetic tape, or toggled in on switches on the front panel of the computer. Absolute machine languages were later termed first-generation programming languages.
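A toy interpreter in Python can suggest what programming in an absolute machine language was like: the program is nothing but numbers, with no symbolic names at all. The instruction set below is invented purely for illustration and does not correspond to any historical machine.

```python
# A toy accumulator machine. Opcodes (invented for this sketch):
#   1 n  -> load the constant n into the accumulator
#   2 n  -> add the constant n to the accumulator
#   0    -> halt
def run(program):
    acc = 0
    pc = 0                        # program counter
    while program[pc] != 0:
        op, arg = program[pc], program[pc + 1]
        if op == 1:
            acc = arg             # load
        elif op == 2:
            acc += arg            # add
        pc += 2                   # advance past opcode and operand
    return acc

# The "program" is just a sequence of numbers, as it would be
# on punched cards or front-panel switches.
print(run([1, 40, 2, 2, 0]))  # 42
```

Writing and debugging raw numeric programs like this is what motivated assembly languages and, later, high-level languages.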
The first high-level programming languages, or third-generation programming languages (3GL), were written in the 1950s. An early high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998 and 2000.[26]
+
John Mauchly's Short Code, proposed in 1949, was one of the first high-level languages ever developed for an electronic computer.[27] Unlike machine code, Short Code statements represented mathematical expressions in understandable form. However, the program had to be translated into machine code every time it ran, making the process much slower than running the equivalent machine code.
The second autocode was developed for the Mark 1 by R. A. Brooker in 1954 and was called the "Mark 1 Autocode". Brooker also developed an autocode for the Ferranti Mercury in the 1950s in conjunction with the University of Manchester. The version for the EDSAC 2 was devised by D. F. Hartley of University of Cambridge Mathematical Laboratory in 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances, and was noted for its object code optimisation and source-language diagnostics which were advanced for the time. A contemporary but separate thread of development, Atlas Autocode was developed for the University of Manchester Atlas 1 machine.
Another early programming language was devised by Grace Hopper in the US, called FLOW-MATIC. It was developed for the UNIVAC I at Remington Rand during the period from 1955 until 1959. Hopper found that business data processing customers were uncomfortable with mathematical notation, and in early 1955, she and her team wrote a specification for an English programming language and implemented a prototype.[34] The FLOW-MATIC compiler became publicly available in early 1958 and was substantially complete in 1959.[35] Flow-Matic was a major influence in the design of COBOL, since only it and its direct descendant AIMACO were in actual use at the time.[36]
The increased use of high-level languages introduced a requirement for low-level programming languages or system programming languages. These languages, to varying degrees, provide facilities between assembly languages and high-level languages, and can be used to perform tasks which require direct access to hardware facilities but still provide higher-level control structures and error-checking.
+
The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use:
ALGOL refined both structured procedural programming and the discipline of language specification; the "Revised Report on the Algorithmic Language ALGOL 60" became a model for how later language specifications were written.
In the 1960s, Simula was the first language designed to support object-oriented programming; in the mid-1970s, Smalltalk followed with the first "purely" object-oriented language.
+
C was developed between 1969 and 1973 as a system programming language for the Unix operating system, and remains popular.[38]
Each of these languages spawned descendants, and most modern programming languages count at least one of them in their ancestry.
+
The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it.[39] Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level" programming languages.[40]
+A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.
+
+
+
The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language derived from Pascal and intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs.[41] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decades.
+
One important trend in language design for programming large-scale systems during the 1980s was an increased focus on the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, which were often wedded to generic programming constructs.[42]
+
The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side programming, and bytecode virtual machines became popular again in commercial settings with their promise of "Write once, run anywhere" (UCSD Pascal had been popular for a time in the early 1980s). These developments were not fundamentally novel; rather, they were refinements of many existing languages and paradigms (although their syntax was often based on the C family of programming languages).
+
Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as Microsoft's LINQ.
+
Fourth-generation programming languages (4GL) are computer programming languages that aim to provide a higher level of abstraction of the internal computer hardware details than 3GLs. Fifth-generation programming languages (5GL) are programming languages based on solving problems using constraints given to the program, rather than using an algorithm written by a programmer.
All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.
A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.
+
The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax. Below is a simple grammar, based on Lisp:
+expression ::= atom | list
+atom ::= number | symbol
+number ::= [+-]?['0'-'9']+
+symbol ::= ['A'-'Z''a'-'z'].*
+list ::= '(' expression* ')'
+
+
This grammar specifies the following:
+
+
an expression is either an atom or a list;
+
an atom is either a number or a symbol;
+
a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
+
a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
+
a list is a matched pair of parentheses, with zero or more expressions inside it.
+
+
The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232 (1)).
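A grammar like this one can be checked mechanically. The following is a minimal sketch of a tokenizer and recursive-descent parser for it in Python (the function names are illustrative, and parentheses are excluded from symbols so that lists tokenize cleanly):

```python
import re

# One alternative per token kind, mirroring the grammar's terminals:
# '(' and ')', number ::= [+-]?['0'-'9']+, and symbol ::= a letter
# followed by non-space, non-parenthesis characters.
TOKEN = re.compile(r"\s*(?:(\()|(\))|([+-]?[0-9]+)|([A-Za-z][^()\s]*))")

def tokenize(text):
    """Split text into tokens; stops at the first unrecognized character."""
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m or m.end() == pos:
            break
        tokens.append(m.group(m.lastindex))  # exactly one group matched
        pos = m.end()
    return tokens

def parse_expression(tokens, i=0):
    """Parse one expression; return (parsed value, next token index)."""
    if tokens[i] == "(":              # list ::= '(' expression* ')'
        items, i = [], i + 1
        while tokens[i] != ")":
            item, i = parse_expression(tokens, i)
            items.append(item)
        return items, i + 1
    return tokens[i], i + 1           # atom ::= number | symbol

expr, _ = parse_expression(tokenize("(a b c232 (1))"))
print(expr)   # ['a', 'b', 'c232', ['1']]
```

The structure of the parser follows the grammar directly: each production becomes a branch of the parsing function, which is what makes context-free grammars convenient as language specifications.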
+
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
+
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:
"John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.
+
+
The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):
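A fragment matching this description might read as follows (a sketch only: it assumes complex is a user-defined structure type with an im member, and the variable names are illustrative):

```c
complex *p = NULL;                      /* type declaration; p is the null pointer */
complex abs_p = sqrt(*p >> 4 + p->im);  /* >> has no meaning for a complex value,
                                           and p->im dereferences a null pointer  */
```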
If the type declaration on the first line were omitted, the program would trigger an error on compilation, as the variable "p" would not be defined. But the program would still be syntactically correct, since type declarations provide only semantic information.
+
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[43] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[44] In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements, and do not require code execution.[45]
The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[3] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[46] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allow execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.
A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal design and study of type systems is known as type theory.
A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[47] For example, the data represented by "this text between the quotes" is a string, and in many programming languages dividing a number by a string has no meaning and will be rejected by the compilers. The invalid operation may be detected when the program is compiled ("static" type checking) and will be rejected by the compiler with a compilation error message, or it may be detected when the program is run ("dynamic" type checking), resulting in a run-time exception. Many languages allow a function called an exception handler to be written to handle this exception and, for example, always return "-1" as the result.
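In a dynamically checked language such as Python, the invalid operation is detected only at run time, and an exception handler can supply a fallback result in the manner just described (a minimal sketch):

```python
def safe_divide(a, b):
    """Return a / b, or -1 when the operands' types are incompatible."""
    try:
        return a / b
    except TypeError:   # raised by the run-time ("dynamic") type check
        return -1

print(safe_divide(10, 4))                          # 2.5
print(safe_divide(10, "this text between the quotes"))  # -1
```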
+
A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type—most commonly character strings which are used for both symbolic and numeric data.
+
In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[47] High-level languages which are untyped include BCPL, Tcl, and some varieties of Forth.
+
In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[47] Many production languages provide means to bypass or subvert the type system, trading type-safety for finer control over the program's execution (see casting).
In static typing, all expressions have their types determined before the program is executed, typically at compile time. For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[47]
+
Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[48] Additionally, some programming languages allow for some types to be automatically converted to other types; for example, an int can be used where the program expects a float.
+
Dynamic typing, also called latent typing, determines the type-safety of operations at run time; in other words, types are associated with run-time values rather than textual expressions.[47] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, potentially making debugging more difficult. Lisp, Smalltalk, Perl, Python, JavaScript, and Ruby are dynamically typed.
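The points above can be illustrated in Python, where the same name may refer to values of different types at different times, and a type error surfaces only when the offending line actually executes (a minimal sketch):

```python
x = 42            # the name x refers to an int
x = "forty-two"   # the same name now refers to a str

def buggy(use_len):
    if use_len:
        return len(x)   # fine at run time: x is currently a str
    return x + 1        # TypeError, but only if this branch actually runs

print(buggy(True))   # 9
```

Because no type is checked until execution reaches the faulty branch, the error in the second branch can go unnoticed until that code path is exercised, which is the debugging difficulty the paragraph describes.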
Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[47] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time.
+
Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[47] Strongly typed languages are often termed type-safe or safe.
+
An alternative definition for "weakly typed" refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors. Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[49][50]
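The contrast with a more strongly typed language can be sketched in Python: where JavaScript's 2 * x coerces its operand silently, Python rejects the mixed-type operation unless the conversion is written out explicitly (a minimal illustration):

```python
x = "3"   # a string of digits, not a number

try:
    total = x + 1        # Python will not implicitly convert str to int
except TypeError:
    total = int(x) + 1   # the conversion must be written explicitly

print(total)   # 4
```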
+
It may seem odd to some professional programmers that C could be "weakly, statically typed". However, the generic pointer type void* allows a pointer to be converted to a pointer of any other object type without an explicit cast. In effect, this is much like reinterpreting an array of bytes as any kind of datatype without writing an explicit cast such as (int *) or (char *).
Most programming languages have an associated core library (sometimes known as the 'standard library', especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.
+
The line between a language and its core library differs from language to language. In some cases, the language designers may treat the library as a separate entity from the language. However, a language's core library is often treated as part of the language by its users, and some language specifications even require that this library be made available in all implementations. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.
Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from their semantics, and showing language families of related languages branching one from another.[51][52] But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety, since it has a precise and finite definition.[53] By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has.
+
Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. Although there have been attempts to design one "universal" programming language that serves all purposes, all of them have failed to be generally accepted as filling this role.[54] The need for diverse programming languages arises from the diversity of contexts in which languages are used:
+
+
Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.
+
Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.
Programs may be written once and not change for generations, or they may undergo continual modification.
+
Programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.
+
+
One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per unit of time.[55]
+
Natural language programming has been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish".[56] Alan Perlis was similarly dismissive of the idea.[57] Hybrid approaches have been taken in Structured English and SQL.
+
A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.
The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be.
+
A programming language specification can take several forms, including the following:
+
+
An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., as in the C language), or a formal semantics (e.g., as in Standard ML[58] and Scheme[59] specifications).
+
A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in a natural or a formal language.
An implementation of a programming language provides a way to write programs in that language and execute them on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.
+
The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source a line at a time.
+
Programs that are executed directly on the hardware usually run much faster than those that are interpreted in software.
+
One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode that are about to be used into machine code, for direct execution on the hardware.
Although the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.
+
Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language, and Microsoft's C# programming language, which has open implementations of most parts of the system, also has Common Language Runtime (CLR) as a closed environment.
+
Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB and VBScript. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language.
Thousands of different programming languages have been created, mainly in the computing field.[61] Software is commonly built with five or more programming languages.[62]
+
Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language.
+
A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[63] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.
It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, a different one may have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes;[65][66] Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.
+
Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:
+
+
counting the number of job advertisements that mention the language[67]
+
the number of books sold that teach or describe the language[68]
+
estimates of the number of existing lines of code written in the language – which may underestimate languages not often found in public searches[69]
+
counts of language references (i.e., to the name of the language) found using a web search engine.
+
+
Combining and averaging information from various internet sites, langpop.com claimed that in 2013 the ten most popular programming languages were (in descending order by overall popularity): C, Java, PHP, JavaScript, C++, Python, Shell, Ruby, Objective-C and C#.[70]
There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.
+
The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.
+
In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use, with general-purpose programming languages distinguished from domain-specific programming languages. Traditionally, programming languages have been regarded as describing computation in terms of imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great deal of research in programming languages has been aimed at blurring the distinction between a program as a set of instructions and a program as an assertion about the desired answer, which is the main feature of declarative programming.[71] More refined paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[72] Some general purpose languages were designed largely with educational goals.[73]
+
A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English language keywords, while a minority do not. Other languages may be classified as being deliberately esoteric or not.
^Koetsier, Teun (2001). On the prehistory of programmable machines; musical automata, looms, calculators. PERGAMON, Mechanisma and Machine Theory 36. pp. 589–603.
+
^Ettinger, James (2004) Jacquard's Web, Oxford University Press
^In mathematical terms, this means the programming language is Turing-complete. MacLennan, Bruce J. (1987). Principles of Programming Languages. Oxford University Press. p. 1. ISBN 0-19-511306-3.
+
^ACM SIGPLAN (2003). "Bylaws of the Special Interest Group on Programming Languages of the Association for Computing Machinery". Retrieved 19 June 2006., The scope of SIGPLAN is the theory, design, implementation, description, and application of computer programming languages - languages that permit the specification of a variety of different computations, thereby providing the user with significant control (immediate or delayed) over the computer's operation.
+
^Dean, Tom (2002). "Programming Robots". Building Intelligent Robots. Brown University Department of Computer Science. Retrieved 23 September 2006.
+
^R. Narasimahan, Programming Languages and Computers: A Unified Metatheory, pp. 189--247 in Franz Alt, Morris Rubinoff (eds.) Advances in computers, Volume 8, Academic Press, 1994, ISBN 0-12-012108-5, p.193 : "a complete specification of a programming language must, by definition, include a specification of a processor--idealized, if you will--for that language." [the source cites many references to support this statement]
+
^Ben Ari, Mordechai (1996). Understanding Programming Languages. John Wiley and Sons. Programs and languages can be defined as purely formal mathematical objects. However, more people are interested in programs than in other mathematical objects such as groups, precisely because it is possible to use the program—the sequence of symbols—to control the execution of a computer. While we highly recommend the study of the theory of programming, this text will generally limit itself to the study of programs as they are executed on a computer.
+
^David A. Schmidt, The structure of typed programming languages, MIT Press, 1994, ISBN 0-262-19349-3, p. 32
+
^Pierce, Benjamin (2002). Types and Programming Languages. MIT Press. p. 339. ISBN 0-262-16209-1.
^The Charity Development Group (December 1996). "The CHARITY Home Page". Retrieved 29 June 2006., Charity is a categorical programming language..., All Charity computations terminate.
^Powell, Thomas (2003). HTML & XHTML: the complete reference. McGraw-Hill. p. 25. ISBN 0-07-222942-X. HTML is not a programming language.
+
^Dykes, Lucinda; Tittel, Ed (2005). XML For Dummies, 4th Edition. Wiley. p. 20. ISBN 0-7645-8845-1. ...it's a markup language, not a programming language.
^Scott, Michael (2006). Programming Language Pragmatics. Morgan Kaufmann. p. 802. ISBN 0-12-633951-1. XSLT, though highly specialized to the transformation of XML, is a Turing-complete programming language.
^Syropoulos, Apostolos; Antonis Tsolomitis; Nick Sofroniou (2003). Digital typography using LaTeX. Springer-Verlag. p. 213. ISBN 0-387-95217-9. TeX is not only an excellent typesetting engine but also a real programming language.
+
^Robert A. Edmunds, The Prentice-Hall standard glossary of computer terminology, Prentice-Hall, 1985, p. 91
^S.K. Bajpai, Introduction To Computers And C Programming, New Age International, 2007, ISBN 81-224-1379-X, p. 346
+
^R. Narasimahan, Programming Languages and Computers: A Unified Metatheory, pp. 189--247 in Franz Alt, Morris Rubinoff (eds.) Advances in computers, Volume 8, Academic Press, 1994, ISBN 0-12-012108-5, p.215: "[...] the model [...] for computer languages differs from that [...] for programming languages in only two respects. In a computer language, there are only finitely many names--or registers--which can assume only finitely many values--or states--and these states are not further distinguished in terms of any other attributes. [author's footnote:] This may sound like a truism but its implications are far reaching. For example, it would imply that any model for programming languages, by fixing certain of its parameters or features, should be reducible in a natural way to a model for computer languages."
+
^John C. Reynolds, Some thoughts on teaching programming and programming languages, SIGPLAN Notices, Volume 43, Issue 11, November 2008, p.109
+
^Rojas, Raúl, et al. (2000). "Plankalkül: The First High-Level Programming Language and its Implementation". Institut für Informatik, Freie Universität Berlin, Technical Report B-3/2000. (full text)
+
^Sebesta, W.S Concepts of Programming languages. 2006;M6 14:18 pp.44. ISBN 0-321-33025-0
+
^Knuth, Donald E.; Pardo, Luis Trabb. "Early development of programming languages". Encyclopedia of Computer Science and Technology (Marcel Dekker) 7: 419–493.
^Richard L. Wexelblat: History of Programming Languages, Academic Press, 1981, chapter XIV.
+
^François Labelle. "Programming Language Usage Graph". SourceForge. Retrieved 21 June 2006.. This comparison analyzes trends in number of projects hosted by a popular community programming repository. During most years of the comparison, C leads by a considerable margin; in 2006, Java overtakes C, but the combination of C/C++ still leads considerably.
+
^Hayes, Brian (2006). "The Semicolon Wars". American Scientist94 (4): 299–303. doi:10.1511/2006.60.299.
^Tetsuro Fujise, Takashi Chikayama, Kazuaki Rokusawa, Akihiko Nakase (December 1994). "KLIC: A Portable Implementation of KL1" Proc. of FGCS '94, ICOT Tokyo, December 1994. http://www.icot.or.jp/ARCHIVE/HomePage-E.html KLIC is a portable implementation of a concurrent logic programming language KL1.
^Jeffrey Kegler, "Perl and Undecidability", The Perl Review. Papers 2 and 3 prove, using respectively Rice's theorem and direct reduction to the halting problem, that the parsing of Perl programs is in general undecidable.
^Specifically, instantiations of generic types are inferred for certain expression forms. Type inference in Generic Java—the research language that provided the basis for Java 1.5's bounded parametric polymorphism extensions—is discussed in two informal manuscripts from the Types mailing list: Generic Java type inference is unsound (Alan Jeffrey, 17 December 2001) and Sound Generic Java type inference (Martin Odersky, 15 January 2002). C#'s type system is similar to Java's, and uses a similar partial type inference scheme.
^IBM in first publishing PL/I, for example, rather ambitiously titled its manual The universal programming language PL/I (IBM Library; 1966). The title reflected IBM's goals for unlimited subsetting capability: PL/I is designed in such a way that one can isolate subsets from it satisfying the requirements of particular applications. ("PL/I". Encyclopedia of Mathematics. Retrieved 29 June 2006.). Ada and UNCOL had similar early goals.
+
^Frederick P. Brooks, Jr.: The Mythical Man-Month, Addison-Wesley, 1982, pp. 93-94
^Kelsey, Richard; William Clinger; Jonathan Rees (February 1998). "Section 7.2 Formal semantics". Revised5 Report on the Algorithmic Language Scheme. Retrieved 9 June 2006.
^Mayer, Philip; Bauer, Alexander (1 January 2015). "An Empirical Analysis of the Utilization of Multiple Programming Languages in Open Source Projects". EASE '15. New York, NY, USA: ACM: 4:1–4:10. doi:10.1145/2745802.2745805. ISBN978-1-4503-3350-4. Retrieved 18 September 2015. Results: We found (a) a mean number of 5 languages per project with a clearly dominant main general-purpose language and 5 often-used DSL types, (b) a significant influence of the size, number of commits, and the main language on the number of languages as well as no significant influence of age and number of contributors, and (c) three language ecosystems grouped around XML, Shell/Make, and HTML/CSS. Conclusions: Multi-language programming seems to be common in open-source projects and is a factor which must be dealt with in tooling and when assessing development and maintenance of such software systems.
^Bieman, J.M.; Murdock, V., Finding code on the World Wide Web: a preliminary investigation, Proceedings First IEEE International Workshop on Source Code Analysis and Manipulation, 2001
In modern philosophy and mathematics, a property is a characteristic of an object; a red object is said to have the property of redness. The property may be considered a form of object in its own right, able to possess other properties. A property, however, differs from individual objects in that it may be instantiated, often in more than one thing. It differs from the logical/mathematical concept of class by not having any concept of extensionality, and from the philosophical concept of class in that a property is considered to be distinct from the objects which possess it. Understanding how different individual entities (or particulars) can in some sense have some of the same properties is the basis of the problem of universals. The terms attribute and quality have similar meanings.
In classical Aristotelian terminology, a property (Greek: idion, Latin: proprium) is one of the predicables. It is a non-essential quality of a species (like an accident), but a quality which is nevertheless characteristically present in members of that species (and in no others). For example, "ability to laugh" may be considered a special characteristic of human beings. However, "laughter" is not an essential quality of the species human, whose Aristotelian definition of "rational animal" does not require laughter. Thus, in the classical framework, properties are characteristic, but non-essential, qualities.
A property may be classified as either determinate or determinable. A determinable property is one that can get more specific. For example, color is a determinable property because it can be restricted to redness, blueness, etc.[1] A determinate property is one that cannot become more specific. This distinction may be useful in dealing with issues of identity.[2]
Daniel Dennett distinguishes between lovely properties (such as loveliness itself), which, although they require an observer to be recognised, exist latently in perceivable objects; and suspect properties which have no existence at all until attributed by an observer (such as being a suspect in a murder enquiry)[3]
Property dualism: the exemplification of two kinds of property by one kind of substance
Property dualism describes a category of positions in the philosophy of mind which hold that, although the world is constituted of just one kind of substance—the physical kind—there exist two distinct kinds of properties: physical properties and mental properties. In other words, it is the view that non-physical, mental properties (such as beliefs, desires and emotions) inhere in some physical substances (namely brains).
In mathematical terminology, a property p defined for all elements of a set X is usually defined as a function p: X → {true, false}, that is true whenever the property holds; or equivalently, as the subset of X for which p holds; i.e. the set {x| p(x) = true}; p is its indicator function. It may be objected (see above) that this defines merely the extension of a property, and says nothing about what causes the property to hold for exactly those values.
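The predicate/extension duality just described can be sketched in a few lines of Python; the set X and the property "is even" are illustrative choices, not drawn from the text:

```python
# A property p over a set X, modelled two equivalent ways:
# as a predicate (a function X -> {True, False}) and as its
# extension (the subset of X on which the predicate holds).

X = set(range(10))

def is_even(x):                 # the property as a predicate
    return x % 2 == 0

extension = {x for x in X if is_even(x)}   # the property as a subset of X

def indicator(subset):
    """Return the indicator function of a subset: the predicate form."""
    return lambda x: x in subset

p = indicator(extension)        # recover a predicate from the extension
print(sorted(extension))        # [0, 2, 4, 6, 8]
print(p(4), p(5))               # True False
```

As the text notes, this captures only the extension of the property: two distinct properties that happen to hold of exactly the same elements of X are indistinguishable in this representation.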
The ontological fact that something has a property is typically represented in language by applying a predicate to a subject. However, taking any grammatical predicate whatsoever to be a property, or to have a corresponding property, leads to certain difficulties, such as Russell's paradox and the Grelling–Nelson paradox. Moreover, a real property can imply a host of true predicates: for instance, if X has the property of weighing more than 2 kilos, then the predicates "..weighs more than 1.9 kilos", "..weighs more than 1.8 kilos", etc., are all true of it. Other predicates, such as "is an individual", or "has some properties" are uninformative or vacuous. There is some resistance to regarding such so-called "Cambridge properties" as legitimate.[4]
An intrinsic property is a property that an object or a thing has of itself, independently of other things, including its context. An extrinsic (or relational) property is a property that depends on a thing's relationship with other things. For example, mass is a physical intrinsic property of any physical object, whereas weight is an extrinsic property that varies depending on the strength of the gravitational field in which the respective object is placed.
A relation is often considered to be a more general case of a property. Relations are true of several particulars, or shared amongst them. Thus the relation ".. is taller than .." holds "between" two individuals, who would occupy the two ellipses ('..'). Relations can be expressed by N-place predicates, where N is greater than 1.
It is widely accepted that there are at least some apparent relational properties which are merely derived from non-relational (or 1-place) properties. For instance, "A is heavier than B" is a relational predicate, but it is derived from two non-relational properties: the mass of A and the mass of B. Such relations are called external relations, as opposed to the more genuine internal relations.[5] Some philosophers believe that all relations are external, leading to scepticism about relations in general, on the basis that external relations have no fundamental existence.
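The derivation of such an external relation from 1-place properties can be sketched as follows; the objects and their mass values are invented for illustration:

```python
# The 1-place property "mass", assigned to some example particulars.
masses = {"A": 3.0, "B": 2.0, "C": 2.0}

def heavier_than(x, y):
    """A 2-place (relational) predicate derived entirely from the
    1-place property mass: nothing about the pair itself is consulted."""
    return masses[x] > masses[y]

print(heavier_than("A", "B"))   # True
print(heavier_than("B", "C"))   # False
```

The point of the example is that the relation adds no new fact over and above the two masses, which is what marks it as external in the text's sense.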
A quality (from Latin qualitas)[1] is an attribute or a property.[2] In contemporary philosophy the idea of qualities, and especially how to distinguish certain kinds of qualities from one another, remains controversial.[2]
Aristotle analyzed qualities in his logical work, the Categories. To him, qualities are hylomorphically formal attributes, such as "white" or "grammatical". Categories of state, such as "shod" and "armed", are also non-essential qualities (katà symbebekós).[3] Aristotle observed: "one and the selfsame substance, while retaining its identity, is yet capable of admitting contrary qualities. The same individual person is at one time white, at another black, at one time warm, at another cold, at one time good, at another bad. This capacity is found nowhere else... it is the peculiar mark of substance that it should be capable of admitting contrary qualities; for it is by itself changing that it does so".[4] Aristotle described four types of qualitative opposites: correlatives, contraries, privatives, and positives.[5]
John Locke presented a distinction between primary and secondary qualities in An Essay Concerning Human Understanding. For Locke, a quality is an idea of a sensation or a perception. Locke further asserts that qualities can be divided into two kinds: primary and secondary qualities. Primary qualities are intrinsic to an object—a thing or a person—whereas secondary qualities are dependent on the interpretation of the subjective mode and the context of appearance.[2] For example, a shadow is a secondary quality: it requires a certain lighting to be applied to an object. For another example, consider the mass of an object. Weight is a secondary quality since, as a measurement of gravitational force, it varies depending on the distance to, and mass of, very massive objects like the Earth, as described by Newton's law of gravitation. It could be thought that mass is intrinsic to an object, and thus a primary quality. In the context of relativity, however, the idea of mass as quantifying an amount of matter requires caution. The relativistic mass varies for variously traveling observers; then there is the idea of rest mass or invariant mass (the magnitude of the energy-momentum 4-vector[6]), essentially a system's relativistic mass in its own rest frame of reference.
(Note, however, that Aristotle drew a distinction between qualification and quantification; a thing's quality can vary in degree.)[7] Only an isolated system's invariant mass in relativity is the same as observed in variously traveling observers' rest frames, and conserved in reactions; moreover, a system's heat, including the energy of its massless particles such as photons, contributes to the system's invariant mass (indeed, otherwise even an isolated system's invariant mass would not be conserved in reactions); even a cloud of photons traveling in different directions has, as a whole, a rest frame and a rest energy equivalent to invariant mass.[8] Thus, to treat rest mass (and by that stroke, rest energy) as an intrinsic quality distinctive of physical matter raises the question of what is to count as physical matter. Little of the invariant mass of a hadron (for example a proton or a neutron) consists in the invariant masses of its component quarks (in a proton, around 1%) apart from their gluon particle fields; most of it consists in the quantum chromodynamics binding energy of the (massless) gluons (see Quark#Mass).
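The invariant mass referred to above is, in standard relativistic notation, the magnitude of the energy-momentum 4-vector: for a system of total energy E and total momentum p,

```latex
m_0 c^2 = \sqrt{E^2 - \lVert \mathbf{p} \rVert^2 c^2}
```

This makes the photon-cloud claim concrete: two photons of energy E_γ each, traveling in opposite directions, have total momentum zero, so the pair has invariant mass 2E_γ/c², even though each photon individually is massless.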
Conceptions of quality as metaphysical and ontological
^ Studtmann, P. (2007). Zalta, E. N., ed. "Aristotle's Categories". Stanford: Stanford Encyclopedia of Philosophy. [Regarding] Habits and Dispositions; Natural Capabilities and Incapabilities; Affective Qualities and Affections; and Shapes; [...] Ackrill finds Aristotle's division of quality at best unmotivated.
Contemporary science is typically subdivided into the natural sciences, which study the material world, the social sciences, which study people and societies, and the formal sciences, such as mathematics. The formal sciences are often excluded as they do not depend on empirical observations.[3] Disciplines that use science, such as engineering and medicine, may also be considered to be applied sciences.[4]
In the 17th and 18th centuries scientists increasingly sought to formulate knowledge in terms of laws of nature. Over the course of the 19th century, the word "science" became increasingly associated with the scientific method itself, as a disciplined way to study the natural world. It was in the 19th century that scientific disciplines such as physics, chemistry, and biology reached their modern shapes. The same time period also included the origin of the terms "scientist" and "scientific community," the founding of scientific institutions, and increasing significance of the interactions with society and other aspects of culture.[9][10]
An animation showing the movement of the continents from the separation of Pangaea until the present day
Science in a broad sense existed before the modern era, and in many historical civilizations.[nb 4] Modern science is distinct in its approach and successful in its results: 'modern science' now defines what science is in the strictest sense of the term.[11]
Science in its original sense is a word for a type of knowledge, rather than a specialized word for the pursuit of such knowledge. In particular, it is one of the types of knowledge which people can communicate to each other and share. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thinking. This is shown by the construction of complex calendars, techniques for making poisonous plants edible, and buildings such as the pyramids. However, no consistent conscious distinction was made between knowledge of such things, which are true in every community, and other types of communal knowledge, such as mythologies and legal systems.
Before the invention or discovery of the concept of "nature" (Ancient Greek phusis) by the Pre-Socratic philosophers, the same words tended to be used to describe the natural "way" in which a plant grows,[12] and the "way" in which, for example, one tribe worships a particular god. For this reason it is claimed that these men were the first philosophers in the strict sense, and also the first people to clearly distinguish "nature" and "convention".[13] Science was therefore distinguished as the knowledge of nature and the things which are true for every community, and the name of the specialized pursuit of such knowledge was philosophy — the realm of the first philosopher-physicists. They were mainly speculators or theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature (artifice or technology, Greek technē) was seen by classical scientists as a more appropriate interest for lower class artisans.[14] A clear-cut distinction between formal (eon) and empirical science (doxa) was made by the pre-Socratic philosopher Parmenides (fl. late sixth or early fifth century BCE). Although his work peri physeos is a poem, it may be viewed as an epistemological essay, an essay on method in natural science. Parmenides' ἐὸν may refer to a formal system, a calculus which can describe nature more precisely than natural languages. 'Physis' may be identical to ἐὸν.[15]
A major turning point in the history of early philosophical science was the controversial but successful attempt by Socrates to apply philosophy to the study of human things, including human nature, the nature of political communities, and human knowledge itself. He criticized the older type of study of physics as too purely speculative and lacking in self-criticism. He was particularly concerned that some of the early physicists treated nature as if it could be assumed that it had no intelligent order, explaining things merely in terms of motion and matter. The study of human things had been the realm of mythology and tradition, and Socrates was executed.[17] Aristotle later created a less controversial systematic programme of Socratic philosophy, which was teleological and human-centred. He rejected many of the conclusions of earlier scientists. For example, in his physics the sun goes around the earth, and many things have it as part of their nature that they are for humans. Each thing has a formal cause and final cause and a role in the rational cosmic order. Motion and change are described as the actualization of potentials already in things, according to what types of things they are. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being (a study Aristotle divided into ethics and political philosophy), they did not argue for any other types of applied science.
Aristotle maintained the sharp distinction between science and the practical knowledge of artisans, treating theoretical speculation as the highest type of human activity, practical thinking about good living as something less lofty, and the knowledge of artisans as something only suitable for the lower classes. In contrast to modern science, Aristotle's influential emphasis was upon the "theoretical" steps of deducing universal rules from raw data, and did not treat the gathering of experience and raw data as part of science itself.[nb 5]
Ibn al-Haytham (Alhazen), 965–1039, Iraq. A Muslim scholar considered by some to be the father of modern scientific methodology, owing to his emphasis on experimental data and the reproducibility of results.[19][nb 6]
During late antiquity and the early Middle Ages, the Aristotelian approach to inquiries on natural phenomena was used. Some ancient knowledge was lost, or in some cases kept in obscurity, during the fall of the Roman Empire and periodic political struggles. However, the general fields of science, or "natural philosophy" as it was called, and much of the general knowledge from the ancient world remained preserved through the works of the early Latin encyclopedists like Isidore of Seville. Also, in the Byzantine Empire, many Greek science texts were preserved in Syriac translations done by groups such as Nestorians and Monophysites.[20] Many of these were translated later on into Arabic under the Caliphate, during which many types of classical learning were preserved and in some cases improved upon.[20][nb 7] The House of Wisdom was established in Abbasid-era Baghdad, Iraq.[21] It is considered to have been a major intellectual center during the Islamic Golden Age, where Muslim scholars such as al-Kindi and Ibn Sahl in Baghdad, and Ibn al-Haytham in Cairo, flourished from the ninth to the thirteenth centuries, until the Mongol sack of Baghdad. Ibn al-Haytham, known later to the West as Alhazen, furthered the Aristotelian viewpoint[22] by emphasizing experimental data.[nb 8][23] In the later medieval period, as demand for translations grew, for example from the Toledo School of Translators, Western Europeans began collecting texts written not only in Latin, but also Latin translations from Greek, Arabic, and Hebrew. The texts of Aristotle, Ptolemy,[nb 9] and Euclid, preserved in the Houses of Wisdom, were sought amongst Catholic scholars. In Europe, Alhazen's De Aspectibus directly influenced Roger Bacon (13th century) in England, who argued for more experimental science, as demonstrated by Alhazen.
By the late Middle Ages, a synthesis of Catholicism and Aristotelianism known as Scholasticism was flourishing in Western Europe, which had become a new geographic center of science, but all aspects of scholasticism were criticized in the 15th and 16th centuries.
Galen (129–c. 216) noted the optic chiasm is X-shaped. (Engraving from Vesalius, 1543)

Front page of the 1572 Latin Opticae Thesaurus (optics treasury), which included Alhazen's Book of Optics, showing propagation of light, rainbows, parabolic mirrors, distorted images caused by refraction in water, and perspective.
Medieval science carried on the views of the Hellenist civilization of Socrates, Plato, and Aristotle, as shown by Alhazen's lost work A Book in which I have Summarized the Science of Optics from the Two Books of Euclid and Ptolemy, to which I have added the Notions of the First Discourse which is Missing from Ptolemy's Book, listed in Ibn Abi Usaibia's catalog and cited in Smith 2001 (vol. 1, pp. xv, 91). Alhazen conclusively disproved Ptolemy's theory of vision.
Dürer's use of optics (1525)
But Alhazen retained Aristotle's ontology; Roger Bacon, Witelo, and John Peckham each built up a scholastic ontology upon Alhazen's Book of Optics: a causal chain beginning with sensation, perception, and finally apperception of the individual and universal forms of Aristotle.[24] This model of vision became known as Perspectivism, which was exploited and studied by the artists of the Renaissance.
A. Mark Smith points out that the perspectivist theory of vision "is remarkably economical, reasonable, and coherent", pivoting on three of Aristotle's four causes: formal, material, and final.[25] Although Alhazen knew that a scene imaged through an aperture is inverted, he argued that vision is about perception. This was overturned by Kepler,[26]:p.102 who modelled the eye with a water-filled glass sphere, with an aperture in front of it to model the entrance pupil. He found that all the light from a single point of the scene was imaged at a single point at the back of the glass sphere. The optical chain ends on the retina at the back of the eye and the image is inverted.[nb 10]
Galileo made innovative use of experiment and mathematics. However, his persecution began after Pope Urban VIII gave Galileo his blessing to write about the Copernican system. Galileo had used arguments supplied by the Pope and put them in the voice of the simpleton in the work Dialogue Concerning the Two Chief World Systems, which caused great offense to the Pope.[28]
In Northern Europe, the new technology of the printing press was widely used to publish many arguments, including some that disagreed with church dogma. René Descartes and Francis Bacon published philosophical arguments in favor of a new type of non-Aristotelian science. Descartes argued that mathematics could be used to study nature, as Galileo had done, and Bacon emphasized the importance of experiment over contemplation. Bacon questioned the Aristotelian concepts of formal cause and final cause, and promoted the idea that science should study the laws of "simple" natures, such as heat, rather than assuming that there is any specific nature, or "formal cause", of each complex type of thing. This new modern science began to see itself as describing "laws of nature". This updated approach to studies in nature was seen as mechanistic. Bacon also argued that science should, for the first time, aim at practical inventions for the improvement of all human life.
Age of Enlightenment
In the 17th and 18th centuries, the project of modernity, as had been promoted by Bacon and Descartes, led to rapid scientific advance and the successful development of a new type of natural science, mathematical, methodically experimental, and deliberately innovative. Newton and Leibniz succeeded in developing a new physics, now referred to as Newtonian physics, which could be confirmed by experiment and explained using mathematics. Leibniz also incorporated terms from Aristotelian physics, but now being used in a new non-teleological way, for example "energy" and "potential" (modern versions of Aristotelian "energeia and potentia"). In the style of Bacon, he assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes for each type of thing. It is during this period that the word "science" gradually became more commonly used to refer to a type of pursuit of a type of knowledge, especially knowledge of nature — coming close in meaning to the old term "natural philosophy".
Both John Herschel and William Whewell systematized methodology: the latter coined the term scientist. When Charles Darwin published On the Origin of Species he established descent with modification as the prevailing evolutionary explanation of biological complexity. His theory of natural selection provided a natural explanation of how species originated, but this only gained wide acceptance a century later. John Dalton developed the idea of atoms. The laws of thermodynamics and the electromagnetic theory were also established in the 19th century, which raised new questions which could not easily be answered using Newton's framework. The phenomena that would allow the deconstruction of the atom were discovered in the last decade of the 19th century: the discovery of X-rays inspired the discovery of radioactivity. In the next year came the discovery of the first subatomic particle, the electron.
Einstein's Theory of Relativity and the development of quantum mechanics led to the replacement of Newtonian physics with a new physics which contains two parts that describe different types of events in nature.
In the first half of the century the development of artificial fertilizer made possible global human population growth. At the same time, the structure of the atom and its nucleus was elucidated, leading to the release of "atomic energy" (nuclear power). In addition, the extensive use of scientific innovation, stimulated by the wars of this century, led to antibiotics and increased life expectancy, revolutions in transportation (automobiles and aircraft), and the development of ICBMs, a space race, and a nuclear arms race — all giving a widespread public appreciation of the importance of modern science.
More recently, it has been argued that the ultimate purpose of science is to make sense of human beings and our nature; for example, in his book Consilience, E. O. Wilson said "The human condition is the most important frontier of the natural sciences."[2]:334
The scientific method seeks to explain the events of nature in a reproducible way.[nb 11] An explanatory thought experiment or hypothesis is put forward as explanation, using principles such as parsimony (also known as "Occam's Razor"), and is generally expected to seek consilience—fitting well with other accepted facts related to the phenomena.[2] This new explanation is used to make falsifiable predictions that are testable by experiment or observation. The predictions are to be posted before a confirming experiment or observation is sought, as proof that no tampering has occurred. Disproof of a prediction is evidence of progress.[nb 12][nb 13] This is done partly through observation of natural phenomena, but also through experimentation that tries to simulate natural events under controlled conditions, as appropriate to the discipline (in the observational sciences, such as astronomy or geology, a predicted observation might take the place of a controlled experiment). Experimentation is especially important in science to help establish causal relationships (to avoid the correlation fallacy).
When a hypothesis proves unsatisfactory, it is either modified or discarded.[29] If the hypothesis survives testing, it may become adopted into the framework of a scientific theory. This is a logically reasoned, self-consistent model or framework for describing the behavior of certain natural phenomena. A theory typically describes the behavior of much broader sets of phenomena than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. In addition to testing hypotheses, scientists may also generate a model based on observed phenomena. This is an attempt to describe or depict the phenomenon in terms of a logical, physical or mathematical representation and to generate new hypotheses that can be tested.[30]
While performing experiments to test hypotheses, scientists may have a preference for one outcome over another, and so it is important to ensure that science as a whole can eliminate this bias.[31][32] This can be achieved by careful experimental design, transparency, and a thorough peer review process of the experimental results as well as any conclusions.[33][34] After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be.[35] Taken in its entirety, the scientific method allows for highly creative problem solving while minimizing any effects of subjective bias on the part of its users (namely the confirmation bias).[36]
Mathematics is essential to the sciences. One important function of mathematics in science is the role it plays in the expression of scientific models. Observing and collecting measurements, as well as hypothesizing and predicting, often require extensive use of mathematics. Arithmetic, algebra, geometry, trigonometry and calculus, for example, are all essential to physics. Virtually every branch of mathematics has applications in science, including "pure" areas such as number theory and topology.
Statistical methods, which are mathematical techniques for summarizing and analyzing data, allow scientists to assess the level of reliability and the range of variation in experimental results. Statistical analysis plays a fundamental role in many areas of both the natural sciences and social sciences.
Computational science applies computing power to simulate real-world situations, enabling a better understanding of scientific problems than formal mathematics alone can achieve. According to the Society for Industrial and Applied Mathematics, computation is now as important as theory and experiment in advancing scientific knowledge.[37]
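A minimal, hypothetical example of such a simulation (not from the source, and deliberately simplified): stepping free fall forward in small time increments and comparing the numerical result against the known closed-form answer d = gt²/2.

```python
# Numerically integrate free fall with simple Euler steps and compare
# against the analytic result d = g * t^2 / 2.
g, dt = 9.81, 0.001          # gravity (m/s^2), time step (s)
t, v, d = 0.0, 0.0, 0.0      # elapsed time, velocity, distance fallen
while t < 2.0:               # simulate two seconds of free fall
    v += g * dt              # update velocity from acceleration
    d += v * dt              # update position from velocity
    t += dt
exact = 0.5 * g * 2.0 ** 2   # closed-form distance after 2 s (~19.62 m)
print(f"simulated: {d:.2f} m, exact: {exact:.2f} m")
```

Real computational science applies the same idea, discretize, step, and validate against theory or experiment, to systems far too complex for closed-form solutions.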
Whether mathematics itself is properly classified as science has been a matter of some debate. Some thinkers see mathematicians as scientists, regarding physical experiments as inessential or mathematical proofs as equivalent to experiments. Others do not see mathematics as a science, since it does not require an experimental test of its theories and hypotheses. Mathematical theorems and formulas are obtained by logical derivations which presume axiomatic systems, rather than the combination of empirical observation and logical reasoning that has come to be known as the scientific method. In general, mathematics is classified as formal science, while natural and social sciences are classified as empirical sciences.[38]
The scientific community is the group of all interacting scientists. It includes many sub-communities working on particular scientific fields, and within particular institutions; interdisciplinary and cross-institutional activities are also significant.
Scientific fields are commonly divided into two major groups: natural sciences, which study natural phenomena (including biological life), and social sciences, which study human behavior and societies. These groupings are empirical sciences, which means the knowledge must be based on observable phenomena and capable of being tested for its validity by other researchers working under the same conditions.[39] There are also related disciplines that are grouped into interdisciplinary applied sciences, such as engineering and medicine. Within these categories are specialized scientific fields that can include parts of other scientific disciplines but often possess their own nomenclature and expertise.[40]
Mathematics, which is classified as a formal science,[41][42] has both similarities and differences with the empirical sciences (the natural and social sciences). It is similar to empirical sciences in that it involves an objective, careful and systematic study of an area of knowledge; it is different because of its method of verifying its knowledge, using a priori rather than empirical methods.[43] The formal sciences, which also include statistics and logic, are vital to the empirical sciences. Major advances in formal science have often led to major advances in the empirical sciences. The formal sciences are essential in the formation of hypotheses, theories, and laws,[44] both in discovering and describing how things work (natural sciences) and how people think and act (social sciences).
Apart from its broad meaning, the word "science" sometimes refers specifically to the fundamental sciences (mathematics and the natural sciences) alone. Science schools or faculties within many institutions are separate from those for medicine or engineering, which are applied sciences.
An enormous range of scientific literature is published.[49] Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. The first scientific journals, Journal des Sçavans followed by the Philosophical Transactions, began publication in 1665. Since that time the total number of active periodicals has steadily increased. In 1981, one estimate for the number of scientific and technical journals in publication was 11,500.[50] The United States National Library of Medicine currently indexes 5,516 journals that contain articles on topics related to the life sciences. Although the journals are in 39 languages, 91 percent of the indexed articles are published in English.[51]
Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is generally considered necessary to communicate the achievements, news, and ambitions of scientists to a wider populace.
Science magazines such as New Scientist, Science & Vie, and Scientific American cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. Science books engage the interest of many more people. Tangentially, the science fiction genre, primarily fantastic in nature, engages the public imagination and transmits the ideas, if not the methods, of science.
Recent efforts to intensify or develop links between science and non-scientific disciplines, such as literature or, more specifically, poetry, include the Creative Writing Science resource developed through the Royal Literary Fund.[52]
Science has traditionally been a male-dominated field, with some notable exceptions.[nb 14] Women historically faced considerable discrimination in science, much as they did in other areas of male-dominated societies, such as frequently being passed over for job opportunities and denied credit for their work.[nb 15] For example, Christine Ladd (1847–1930) was able to enter a Ph.D. program as 'C. Ladd'; Christine "Kitty" Ladd completed the requirements in 1882, but was awarded her degree only in 1926, after a career which spanned the algebra of logic (see truth table), color vision, and psychology. Her work preceded notable researchers like Ludwig Wittgenstein and Charles Sanders Peirce. The achievements of women in science have been attributed to their defiance of their traditional role as laborers within the domestic sphere.[53]
In the late 20th century, active recruitment of women and the elimination of institutional discrimination on the basis of sex greatly increased the number of female scientists, but large gender disparities remain in some fields: over half of new biologists are female, while 80% of PhDs in physics are awarded to men. Feminists claim this is the result of culture rather than an innate difference between the sexes, and some experiments have shown that parents challenge and explain more to boys than to girls, asking them to reflect more deeply and logically.[54] In the early part of the 21st century, women in America earned 50.3% of bachelor's degrees, 45.6% of master's degrees, and 40.7% of PhDs in science and engineering fields, earning more than half of the degrees in three fields: psychology (about 70%), social sciences (about 50%), and biology (about 50-60%). In the physical sciences, geosciences, mathematics, engineering, and computer science, however, women earned less than half the degrees.[55] Lifestyle choice also plays a major role in female engagement in science: women with young children are 28% less likely to take tenure-track positions due to work-life balance issues,[56] and female graduate students' interest in careers in research declines dramatically over the course of graduate school, whereas that of their male colleagues remains unchanged.[57]
President Clinton meets the 1998 U.S. Nobel Prize winners in the White House
Science policy is an area of public policy concerned with the policies that affect the conduct of the scientific enterprise, including research funding, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care and environmental monitoring. Science policy also refers to the act of applying scientific knowledge and consensus to the development of public policies. Science policy thus deals with the entire domain of issues that involve the natural sciences. In accordance with public policy being concerned about the well-being of its citizens, science policy's goal is to consider how science and technology can best serve the public.
Science and technology research is often funded through a competitive process, in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations or foundations, allocate scarce funds. Total research funding in most developed countries is between 1.5% and 3% of GDP.[59] In the OECD, around two-thirds of research and development in scientific and technical fields is carried out by industry, and 20% and 10% respectively by universities and government. The government funding proportion in certain industries is higher, and it dominates research in social science and humanities. Similarly, with some exceptions (e.g. biotechnology) government provides the bulk of the funds for basic scientific research. In commercial research and development, all but the most research-oriented corporations focus more heavily on near-term commercialisation possibilities rather than "blue-sky" ideas or technologies (such as nuclear fusion).
Media perspectives
The mass media face a number of pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate may require considerable expertise regarding the matter.[60] Few journalists have real scientific knowledge, and even beat reporters who know a great deal about certain scientific issues may be ignorant about other scientific issues that they are suddenly asked to cover.[61][62]
Many issues damage the relationship of science to the media and the use of science and scientific arguments by politicians. As a very broad generalisation, many politicians seek certainties and facts whilst scientists typically offer probabilities and caveats. However, politicians' ability to be heard in the mass media frequently distorts the scientific understanding by the public. Examples in Britain include the controversy over the MMR inoculation and the 1988 forced resignation of a government minister, Edwina Currie, for revealing the high probability that battery-farmed eggs were contaminated with Salmonella.[63]
John Horgan, Chris Mooney, and researchers from the US and Canada have described Scientific Certainty Argumentation Methods (SCAMs), where an organization or think tank makes it their only goal to cast doubt on supported science because it conflicts with political agendas.[64][65][66][67] Hank Campbell and microbiologist Alex Berezow have described "feel-good fallacies" used in politics, where politicians frame their positions in a way that makes people feel good about supporting certain policies even when scientific evidence shows there is no need to worry or there is no need for dramatic change on current programs.[68]
Working scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: (1) that there is an objective reality shared by all rational observers; (2) that this objective reality is governed by natural laws; (3) that these laws can be discovered by means of systematic observation and experimentation.[11] Philosophy of science seeks a deep understanding of what these underlying assumptions mean and whether they are valid.
The belief that scientific theories should and do represent metaphysical reality is known as realism. It can be contrasted with anti-realism, the view that the success of science does not depend on it being accurate about unobservable entities such as electrons. One form of anti-realism is idealism, the belief that the mind or consciousness is the most basic essence, and that each mind generates its own reality.[nb 16] In an idealistic world view, what is true for one mind need not be true for other minds.
The Sand Reckoner is a work by Archimedes in which he sets out to determine an upper bound for the number of grains of sand that fit into the universe. In order to do this, he had to estimate the size of the universe according to the contemporary model, and invent a way to analyze extremely large numbers.
There are different schools of thought in philosophy of science. The most popular position is empiricism,[nb 17] which holds that knowledge is created by a process involving observation and that scientific theories are the result of generalizations from such observations.[69] Empiricism generally encompasses inductivism, a position that tries to explain the way general theories can be justified by the finite number of observations humans can make and hence the finite amount of empirical evidence available to confirm scientific theories. This is necessary because the number of predictions those theories make is infinite, which means that they cannot be known from the finite amount of evidence using deductive logic only. Many versions of empiricism exist, with the predominant ones being Bayesianism[70] and the hypothetico-deductive method.[71]:p236
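As a rough illustration of the Bayesian view (a hypothetical sketch with made-up likelihood values, not a formal account), a degree of belief in a hypothesis H can be updated by Bayes' rule each time a predicted piece of evidence is observed:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) computed from the prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

belief = 0.5                  # initially undecided about the hypothesis H
for _ in range(3):            # three successful predictions in a row
    belief = bayes_update(belief, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(f"P(H) after 3 confirmations = {belief:.3f}")
```

Each confirmation multiplies the odds in favor of H by the likelihood ratio (here 0.9/0.3 = 3), which is one way of formalizing how a finite body of evidence can incrementally support, without ever proving, a general theory.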
Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation.[71]:p20 Critical rationalism is a contrasting 20th-century approach to science, first defined by Austrian-British philosopher Karl Popper. Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories and that the only way a theory can be affected by observation is when it comes in conflict with it.[71]:pp63–7 Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories, and replacing induction with falsification as the empirical method.[71]:p68 Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error.[72] It covers all products of the human mind, including science, mathematics, philosophy, and art.[73]
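The asymmetry at the heart of falsificationism can be sketched in a few lines of code (a hypothetical illustration using the classic "all swans are white" example, not anything from the source): no finite run of confirming observations proves a universal theory, but a single counterexample refutes it.

```python
def falsified(theory, observations):
    """Return the first observation that conflicts with the theory, if any."""
    for obs in observations:
        if not theory(obs):
            return obs
    return None  # not falsified -- which is not the same as verified

# Hypothetical universal theory: "all swans are white".
all_swans_white = lambda swan: swan == "white"

counterexample = falsified(all_swans_white, ["white", "white", "black"])
print(counterexample)  # the single non-white swan refutes the theory
```

Note that `falsified` returning `None` only means the theory has survived testing so far, mirroring Popper's point that surviving criticism is the most a theory can achieve.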
Another approach, instrumentalism, colloquially termed "shut up and calculate", emphasizes the utility of theories as instruments for explaining and predicting phenomena.[74] It views scientific theories as black boxes, with only their input (initial conditions) and output (predictions) being relevant. Consequences, theoretical entities, and logical structure are claimed to be matters that should simply be ignored and that scientists should not make a fuss about (see interpretations of quantum mechanics). Close to instrumentalism is constructive empiricism, according to which the main criterion for the success of a scientific theory is whether what it says about observable entities is true.
Finally, another approach often cited in debates of scientific skepticism against controversial movements like "scientific creationism", is methodological naturalism. Its main point is that a difference between natural and supernatural explanations should be made, and that science should be restricted methodologically to natural explanations.[nb 18] That the restriction is merely methodological (rather than ontological) means that science should not consider supernatural explanations itself, but should not claim them to be wrong either. Instead, supernatural explanations should be left a matter of personal belief outside the scope of science. Methodological naturalism maintains that proper science requires strict adherence to empirical study and independent verification as a process for properly developing and evaluating explanations for observable phenomena.[77] The absence of these standards, arguments from authority, biased observational studies and other common fallacies are frequently cited by supporters of methodological naturalism as characteristic of the non-science they criticize.
A scientific theory is empirical,[nb 17][78] and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain as science accepts the concept of fallibilism.[nb 19] The philosopher of science Karl Popper sharply distinguishes truth from certainty. He writes that scientific knowledge "consists in the search for truth", but it "is not the search for certainty ... All human knowledge is fallible and therefore uncertain."[79]:p4
New scientific knowledge rarely results in vast changes in our understanding. According to psychologist Keith Stanovich, it may be the media's overuse of words like "breakthrough" that leads the public to imagine that science is constantly proving everything it thought was true to be false.[80]:119–138 While there are such famous cases as the theory of relativity that required a complete reconceptualization, these are extreme exceptions. Knowledge in science is gained by a gradual synthesis of information from different experiments, by various researchers, across different branches of science; it is more like a climb than a leap.[80]:123 Theories vary in the extent to which they have been tested and verified, as well as their acceptance in the scientific community.[nb 20] For example, heliocentric theory, the theory of evolution, relativity theory, and germ theory still bear the name "theory" even though, in practice, they are considered factual.[81] Philosopher Barry Stroud adds that, although the best definition for "knowledge" is contested, being skeptical and entertaining the possibility that one is incorrect is compatible with being correct. Ironically then, the scientist adhering to proper scientific approaches will doubt themselves even once they possess the truth.[82] The fallibilist C. S. Peirce argued that inquiry is the struggle to resolve actual doubt and that merely quarrelsome, verbal, or hyperbolic doubt is fruitless[83]—but also that the inquirer should try to attain genuine doubt rather than resting uncritically on common sense.[84] He held that the successful sciences trust, not to any single chain of inference (no stronger than its weakest link), but to the cable of multiple and various arguments intimately connected.[85]
Stanovich also asserts that science avoids searching for a "magic bullet"; it avoids the single-cause fallacy. This means a scientist would not ask merely "What is the cause of ...", but rather "What are the most significant causes of ...". This is especially the case in the more macroscopic fields of science (e.g. psychology, cosmology).[80]:141–147 Of course, research often analyzes only a few factors at once, but these are always added to the long list of factors that are most important to consider.[80]:141–147 For example: knowing the details of only a person's genetics, or their history and upbringing, or their current situation may not explain a behaviour, but a deep understanding of all these variables combined can be very predictive.
Fringe science, pseudoscience and junk science
An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or junk science.[nb 21] Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of science but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated.[86] Various types of commercial advertising, ranging from hype to fraud, may fall into these categories.
There can also be an element of political or ideological bias on all sides of scientific debates.[citation needed] Sometimes, research may be characterized as "bad science": research that may be well-intentioned but is actually incorrect, obsolete, incomplete, or an over-simplified exposition of scientific ideas. The term "scientific misconduct" refers to situations where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person.[87]
Although encyclopedias such as Pliny the Elder's (fl. 77 AD) Natural History offered purported facts, they proved unreliable. A skeptical point of view, demanding a method of proof, was the practical position taken to deal with unreliable knowledge. As early as 1,000 years ago, scholars such as Alhazen (Doubts Concerning Ptolemy), Roger Bacon, Witelo, John Pecham, Francis Bacon (1605), and C. S. Peirce (1839–1914) provided the community with ways to address these points of uncertainty. In particular, fallacious reasoning can be exposed, such as 'affirming the consequent'.
"If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties." —Francis Bacon (1605) The Advancement of Learning, Book 1, v, 8
The methods of inquiry into a problem have been known for thousands of years,[88] and extend beyond theory to practice. The use of measurements, for example, is a practical approach to settle disputes in the community.
John Ziman points out that intersubjective pattern recognition is fundamental to the creation of all scientific knowledge.[89]:p44 Ziman shows how scientists can identify patterns to each other across centuries: Ziman refers to this ability as 'perceptual consensibility'.[90]:p46 Ziman then makes consensibility, leading to consensus, the touchstone of reliable knowledge.[90]:p104
Basic and applied research
Anthropogenic pollution has an effect on the Earth's environment and climate
Although some scientific research is applied research into specific problems, a great deal of our understanding comes from the curiosity-driven undertaking of basic research. This leads to options for technological advance that were not planned or sometimes even imaginable. This point was made by Michael Faraday when, allegedly in response to the question "what is the use of basic research?" he responded "Sir, what is the use of a new-born child?".[91] For example, research into the effects of red light on the human eye's rod cells did not seem to have any practical purpose; eventually, the discovery that our night vision is not troubled by red light would lead search and rescue teams (among others) to adopt red light in the cockpits of jets and helicopters.[80]:106–110 In a nutshell: Basic research is the search for knowledge. Applied research is the search for solutions to practical problems using this knowledge. Finally, even basic research can take unexpected turns, and there is some sense in which the scientific method is built to harness luck.
Research in practice
Due to the increasing complexity of information and specialization of scientists, most of the cutting-edge research today is done by well-funded groups of scientists, rather than individuals.[92] D.K. Simonton notes that due to the breadth of very precise and far-reaching tools already used by researchers today and the amount of research generated so far, the creation of new disciplines or revolutions within a discipline may no longer be possible, as it is unlikely that some phenomenon meriting its own discipline has been overlooked. Hybridizing of disciplines and finessing knowledge is, in his view, the future of science.[92]
Practical impacts of scientific research
Discoveries in fundamental science can be world-changing. For example:
Hygiene, leading to decreased transmission of infectious diseases; antibodies, leading to techniques for disease diagnosis and targeted anticancer therapies.
^"... modern science is a discovery as well as an invention. It was a discovery that nature generally acts regularly enough to be described by laws and even by mathematics; and required invention to devise the techniques, abstractions, apparatus, and organization for exhibiting the regularities and securing their law-like descriptions."— Heilbron 2003, p. vii
"science". Merriam-Webster Online Dictionary. Merriam-Webster, Inc. Retrieved October 16, 2011. 3 a: knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through scientific method b: such knowledge or such a system of knowledge concerned with the physical world and its phenomena.
^"The historian ... requires a very broad definition of "science" — one that ... will help us to understand the modern scientific enterprise. We need to be broad and inclusive, rather than narrow and exclusive ... and we should expect that the farther back we go [in time] the broader we will need to be." — David Pingree (1992), "Hellenophilia versus the History of Science" Isis83 554–63, as cited on p.3, David C. Lindberg (2007), The beginnings of Western science: the European Scientific tradition in philosophical, religious, and institutional context, Second ed. Chicago: Univ. of Chicago Press ISBN 978-0-226-48205-7
^"... [A] man knows a thing scientifically when he possesses a conviction arrived at in a certain way, and when the first principles on which that conviction rests are known to him with certainty—for unless he is more certain of his first principles than of the conclusion drawn from them he will only possess the knowledge in question accidentally." — Aristotle, Nicomachean Ethics6 (H. Rackham, ed.) Aristot. Nic. Eth. 1139b
^Tracey Tokuhama-Espinosa (2010). Mind, Brain, and Education Science: A Comprehensive Guide to the New Brain-Based Teaching. W. W. Norton & Company. p. 39. ISBN 978-0-393-70607-9. Alhazen (or Al-Haytham; 965–1039 C.E.) was perhaps one of the greatest physicists of all times and a product of the Islamic Golden Age or Islamic Renaissance (7th–13th centuries). He made significant contributions to anatomy, astronomy, engineering, mathematics, medicine, ophthalmology, philosophy, physics, psychology, and visual perception and is primarily attributed as the inventor of the scientific method, for which author Bradley Steffens (2006) describes him as the "first scientist".
^Alhacen had access to the optics books of Euclid and Ptolemy, as is shown by the title of his lost work A Book in which I have Summarized the Science of Optics from the Two Books of Euclid and Ptolemy, to which I have added the Notions of the First Discourse which is Missing from Ptolemy's Book From Ibn Abi Usaibia's catalog, as cited in (Smith 2001):91(vol.1),p.xv
^"[Ibn al-Haytham] followed Ptolemy's bridge building ... into a grand synthesis of light and vision. Part of his effort consisted in devising ranges of experiments, of a kind probed before but now undertaken on larger scale."— Cohen 2010, p. 59
^The translator, Gerard of Cremona (c. 1114–87), inspired by his love of the Almagest, came to Toledo, where he knew he could find the Almagest in Arabic. There he found Arabic books of every description, and learned Arabic in order to translate these books into Latin, being aware of 'the poverty of the Latins'. —As cited by Charles Burnett (2001) "The Coherence of the Arabic-Latin Translation Program in Toledo in the Twelfth Century", pp. 250, 255, & 257, Science in Context14(1/2), 249–288 (2001). DOI: 10.1017/0269889701000096
^Kepler, Johannes (1604) Ad Vitellionem paralipomena, quibus astronomiae pars opticae traditur (Supplements to Witelo, in which the optical part of astronomy is treated) as cited in Smith, A. Mark (2004) "What is the history of Medieval Optics Really About?" Proceedings of the American Philosophical Society148(2 — Jun. 2004), pp. 180-194 p.192 via JSTOR
The full title translation is from p. 60 of James R. Voelkel (2001) Johannes Kepler and the New Astronomy Oxford University Press. Kepler was driven to this experiment after observing the partial solar eclipse at Graz, July 10, 1600. He used Tycho Brahe's method of observation, which was to project the image of the sun on a piece of paper through a pinhole aperture, instead of looking directly at the sun. He disagreed with Brahe's conclusion that total eclipses of the sun were impossible, because there were historical accounts of total eclipses. Instead he deduced that the size of the aperture controls the sharpness of the projected image (the smaller the aperture, the sharper the image — this fact is now fundamental for optical system design). Voelkel, p. 61, notes that Kepler's experiments produced the first correct account of vision and the eye, because he realized he could not accurately write about astronomical observation by ignoring the eye.
^di Francia 1976, p. 13: "The amazing point is that for the first time since the discovery of mathematics, a method has been introduced, the results of which have an intersubjective value!" (Author's punctuation)
^di Francia 1976, pp. 4–5: "One learns in a laboratory; one learns how to make experiments only by experimenting, and one learns how to work with his hands only by using them. The first and fundamental form of experimentation in physics is to teach young people to work with their hands. Then they should be taken into a laboratory and taught to work with measuring instruments — each student carrying out real experiments in physics. This form of teaching is indispensable and cannot be read in a book."
^Fara 2009, p. 204: "Whatever their discipline, scientists claimed to share a common scientific method that ... distinguished them from non-scientists."
Henrietta Leavitt, a professional human computer and astronomer, first published the significant relationship between the luminosity of Cepheid variable stars and their period, which made it possible to determine their distance from Earth. This allowed Hubble to make the discovery of the expanding universe, which led to the Big Bang theory.
^Nina Byers, Contributions of 20th Century Women to Physics, which provides details on 83 female physicists of the 20th century. By 1976, more women were physicists, and the 83 who were detailed were joined by other women in noticeably larger numbers.
^This realization is the topic of intersubjective verifiability, as recounted, for example, by Max Born (1949, 1965) Natural Philosophy of Cause and Chance, who points out that all knowledge, including natural or social science, is also subjective. p. 162: "Thus it dawned upon me that fundamentally everything is subjective, everything without exception. That was a shock."
^ ab In his investigation of the law of falling bodies, Galileo (1638) serves as an example of scientific investigation: Two New Sciences "A piece of wooden moulding or scantling, about 12 cubits long, half a cubit wide, and three finger-breadths thick, was taken; on its edge was cut a channel a little more than one finger in breadth; having made this groove very straight, smooth, and polished, and having lined it with parchment, also as smooth and polished as possible, we rolled along it a hard, smooth, and very round bronze ball. Having placed this board in a sloping position, by lifting one end some one or two cubits above the other, we rolled the ball, as I was just saying, along the channel, noting, in a manner presently to be described, the time required to make the descent. We ... now rolled the ball only one-quarter the length of the channel; and having measured the time of its descent, we found it precisely one-half of the former. Next we tried other distances, comparing the time for the whole length with that for the half, or with that for two-thirds, or three-fourths, or indeed for any fraction; in such experiments, repeated many, many times." Galileo solved the problem of time measurement by weighing a jet of water collected during the descent of the bronze ball, as stated in his Two New Sciences.
^Godfrey-Smith 2003, p. 151 credits Willard Van Orman Quine (1969) "Epistemology Naturalized" Ontological Relativity and Other Essays New York: Columbia University Press, as well as John Dewey, with the basic ideas of naturalism — Naturalized Epistemology, but Godfrey-Smith diverges from Quine's position: according to Godfrey-Smith, "A naturalist can think that science can contribute to answers to philosophical questions, without thinking that philosophical questions can be replaced by science questions.".
+
^"No amount of experimentation can ever prove me right; a single experiment can prove me wrong." —Albert Einstein, noted by Alice Calaprice (ed. 2005) The New Quotable Einstein Princeton University Press and Hebrew University of Jerusalem, ISBN 0-691-12074-9 p. 291. Calaprice denotes this not as an exact quotation, but as a paraphrase of a translation of A. Einstein's "Induction and Deduction". Collected Papers of Albert Einstein7 Document 28. Volume 7 is The Berlin Years: Writings, 1918–1921. A. Einstein; M. Janssen, R. Schulmann, et al., eds.
+
^Fleck, Ludwik (1979). Trenn, Thaddeus J.; Merton, Robert K, eds. Genesis and Development of a Scientific Fact. Chicago: University of Chicago Press. ISBN0-226-25325-2. Claims that before a specific fact "existed", it had to be created as part of a social agreement within a community. Steven Shapin (1980) "A view of scientific thought" Science ccvii (Mar 7, 1980) 1065–66 states "[To Fleck,] facts are invented, not discovered. Moreover, the appearance of scientific facts as discovered things is itself a social construction: a made thing. "
+
^"Pseudoscientific – pretending to be scientific, falsely represented as being scientific", from the Oxford American Dictionary, published by the Oxford English Dictionary; Hansson, Sven Ove (1996)."Defining Pseudoscience", Philosophia Naturalis, 33: 169–176, as cited in "Science and Pseudo-science" (2008) in Stanford Encyclopedia of Philosophy. The Stanford article states: "Many writers on pseudoscience have emphasized that pseudoscience is non-science posing as science. The foremost modern classic on the subject (Gardner 1957) bears the title Fads and Fallacies in the Name of Science. According to Brian Baigrie (1988, 438), "[w]hat is objectionable about these beliefs is that they masquerade as genuinely scientific ones." These and many other authors assume that to be pseudoscientific, an activity or a teaching has to satisfy the following two criteria (Hansson 1996): (1) it is not scientific, and (2) its major proponents try to create the impression that it is scientific".
+
+
For example, Hewitt et al. Conceptual Physical Science Addison Wesley; 3 edition (July 18, 2003) ISBN 0-321-05173-4, Bennett et al. The Cosmic Perspective 3e Addison Wesley; 3 edition (July 25, 2003) ISBN 0-8053-8738-2; See also, e.g., Gauch HG Jr. Scientific Method in Practice (2003).
+
A 2006 National Science Foundation report on Science and engineering indicators quoted Michael Shermer's (1997) definition of pseudoscience: '"claims presented so that they appear [to be] scientific even though they lack supporting evidence and plausibility"(p. 33). In contrast, science is "a set of methods designed to describe and interpret observed and inferred phenomena, past or present, and aimed at building a testable body of knowledge open to rejection or confirmation"(p. 17)'.Shermer M. (1997). Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time. New York: W. H. Freeman and Company. ISBN0-7167-3090-1. as cited by National Science Board. National Science Foundation, Division of Science Resources Statistics (2006). "Science and Technology: Public Attitudes and Understanding". Science and engineering indicators 2006.
+
"A pretended or spurious science; a collection of related beliefs about the world mistakenly regarded as being based on scientific method or as having the status that scientific truths now have," from the Oxford English Dictionary, second edition 1989.
+
+
+
^ abEvicting Einstein, March 26, 2004, NASA. "Both [relativity and quantum mechanics] are extremely successful. The Global Positioning System (GPS), for instance, wouldn't be possible without the theory of relativity. Computers, telecommunications, and the Internet, meanwhile, are spin-offs of quantum mechanics."
^Haq, Syed (2009). "Science in Islam". Oxford Dictionary of the Middle Ages. ISSN 1703-7603. Retrieved 2014-10-22.
+
^Sabra, A. I. (1989). The Optics of Ibn al-Haytham. Books I–II–III: On Direct Vision. London: The Warburg Institute, University of London. pp. 25–29. ISBN 0-85481-072-2.
^David C. Lindberg (2007), The beginnings of Western science: the European Scientific tradition in philosophical, religious, and institutional context, Second ed. Chicago: Univ. of Chicago Press ISBN 978-0-226-48205-7
+
^Cahan, David, ed. (2003). From Natural Philosophy to the Sciences: Writing the History of Nineteenth-Century Science. Chicago: University of Chicago Press. ISBN0-226-08928-2.
+
^The Oxford English Dictionary dates the origin of the word "scientist" to 1834.
^"Progress or Return" in An Introduction to Political Philosophy: Ten Essays by Leo Strauss (Expanded version of Political Philosophy: Six Essays by Leo Strauss, 1975.) Ed. Hilail Gilden. Detroit: Wayne State UP, 1989.
+
^Strauss and Cropsey eds. History of Political Philosophy, Third edition, p.209.
+
^Nikoletseas, Michael M. (2014). Parmenides: The World as Modus Cogitandi. ISBN 978-1-4922-8358-4"
^* Smith, A. Mark (June 2004), "What is the History of Medieval Optics Really About?", Proceedings of the American Philosophical Society148 (2): 180–194, JSTOR1558283:p.189
^ abGrant, Edward (2007). A History of Natural Philosophy: From the Ancient World to the Nineteenth Century. Cambridge University Press. pp. 62–67. ISBN978-0-521-68957-1.
^Smith, A. Mark (1981), "Getting the Big Picture in Perspectivist Optics" Isis72(#4 — Dec. 1981), pp. 568-589 p.588 via JSTOR
+
^Cohen, H. Floris (2010). How modern science came into the world. Four civilizations, one 17th-century breakthrough. (Second ed.). Amsterdam: Amsterdam University Press. ISBN9789089642394.
+
^"Galileo and the Birth of Modern Science, by Stephen Hawking, American Heritage's Invention & Technology, Spring 2009, Vol. 24, No. 1, p. 36
^Krimsky, Sheldon (2003). Science in the Private Interest: Has the Lure of Profits Corrupted the Virtue of Biomedical Research. Rowman & Littlefield. ISBN0-7425-1479-X. OCLC185926306.
+
^Bulger, Ruth Ellen; Heitman, Elizabeth; Reiser, Stanley Joel (2002). The Ethical Dimensions of the Biological and Health Sciences (2nd ed.). Cambridge University Press. ISBN0-521-00886-7. OCLC47791316.
^Bonnie Spanier, From Molecules to Brains, Normal Science Supports Sexist Beliefs About Differences, The Gender and Science Reader ( New York: Routledge 2001)
+
^Crowley, K. Callanan, M.A., Tenenbaum, H. R., & Allen, E. (2001). Parents explain more often to boys than to girls during shared scientific thinking. Psychological Science, 258–261.
+
^Rosser, Sue V. Breaking into the Lab : Engineering Progress for Women in Science. New York: New York University Press. p. 7. ISBN978-0-8147-7645-2.
+
^Goulden et al. 2009. Center for American Progress
+
^Royal Society of Chemistry. 2009. Change of Heart;
^"Original "Doubt is our product ..." memo". University of California, San Francisco. August 21, 1969. Retrieved October 3, 2012. The memo reads "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the mind of the general public. It is also the means of establishing a controversy."
^Hank Campbell, Alex Berezow,. Science Left Behind : Feel-good Fallacies and the Rise of the Anti-Scientific Left (1st ed.). New York: PublicAffairs. ISBN978-1-61039-164-1.
+
^"... [T]he logical empiricists thought that the great aim of science was to discover and establish generalizations." —Godfrey-Smith 2003, p. 41
+
^"Bayesianism tries to understand evidence using probability theory." —Godfrey-Smith 2003, p. 203
^Brugger, E. Christian (2004). "Casebeer, William D. Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition". The Review of Metaphysics58 (2).
^Peirce (1877), "The Fixation of Belief", Popular Science Monthly, v. 12, pp. 1–15, see §IV on p. 6–7. Reprinted Collected Papers v. 5, paragraphs 358–87 (see 374–6), Writings v. 3, pp. 242–57 (see 247–8), Essential Peirce v. 1, pp. 109–23 (see 114–15), and elsewhere.
+
^Peirce (1905), "Issues of Pragmaticism", The Monist, v. XV, n. 4, pp. 481–99, see "Character V" on p. 491. Reprinted in Collected Papers v. 5, paragraphs 438–63 (see 451), Essential Peirce v. 2, pp. 346–59 (see 353), and elsewhere.
+
^Peirce (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy v. 2, n. 3, pp. 140–57, see p. 141. Reprinted in Collected Papers, v. 5, paragraphs 264–317, Writings v. 2, pp. 211–42, Essential Peirce v. 1, pp. 28–55, and elsewhere.
^"Coping with fraud"(PDF). The COPE Report 1999: 11–18. Archived from the original(PDF) on September 28, 2007. Retrieved July 21, 2011. It is 10 years, to the month, since Stephen Lock ... Reproduced with kind permission of the Editor, The Lancet.
+
^In mathematics, Plato's Meno demonstrates that it is possible to know logical propositions, such as the Pythagorean theorem, and even to prove them, as cited by Crease 2009, pp. 35–41
di Francia, Giuliano Toraldo (1976). The Investigation of the Physical World. Originally published in Italian as L'Indagine del Mondo Fisico by Giulio Einaudi editore 1976; first published in English by Cambridge University Press 1981. Cambridge: Cambridge University Press. ISBN0-521-29925-X.
+
Fara, Patricia (2009). Science : a four thousand year history. Oxford: Oxford University Press. p. 408. ISBN978-0-19-922689-4.
Parkin, D. (1991). "Simultaneity and Sequencing in the Oracular Speech of Kenyan Diviners". In Philip M. Peek. African Divination Systems: Ways of Knowing. Indianapolis, IN: Indiana University Press..
Stanovich, Keith E. (2007). How to Think Straight About Psychology. Boston: Pearson Education. ISBN978-0-205-68590-5.
+
Ziman, John (1978). Reliable knowledge: An exploration of the grounds for belief in science. Cambridge: Cambridge University Press. p. 197. ISBN0-521-22087-4
+
+
+
Further reading
+
+
+
Augros, Robert M., Stanciu, George N., The New Story of Science: mind and the universe, Lake Bluff, Ill.: Regnery Gateway, c1984. ISBN 0-89526-833-7
+
Becker, Ernest (1968). The structure of evil; an essay on the unification of the science of man. New York: G. Braziller.
+
Cole, K. C., Things your teacher never told you about science: Nine shocking revelationsNewsday, Long Island, New York, March 23, 1986, pg 21+
+
Crease, Robert P. (2011). World in the Balance: the historic quest for an absolute system of measurement. New York: W.W. Norton. p. 317. ISBN978-0-393-07298-3.
+
Feyerabend, Paul (2005). Science, history of the philosophy, as cited in Honderich, Ted (2005). The Oxford companion to philosophy. Oxford Oxfordshire: Oxford University Press. ISBN0-19-926479-1. OCLC173262485.
+
Feynman, Richard P. (1999). Robbins, Jeffrey, ed. The pleasure of finding things out the best short works of Richard P. Feynman. Cambridge, Mass.: Perseus Books. ISBN0465013120.
+
Feynman, R.P. (1999). The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman. Perseus Books Group. ISBN0-465-02395-9. OCLC181597764.
Gaukroger, Stephen (2006). The Emergence of a Scientific Culture: Science and the Shaping of Modernity 1210–1685. Oxford: Oxford University Press. ISBN0-19-929644-8.
Thurs, Daniel Patrick (2007). Science Talk: Changing Notions of Science in American Popular Culture. New Brunswick, NJ: Rutgers University Press. pp. 22–52. ISBN978-0-8135-4073-3.
Classification of the Sciences in Dictionary of the History of Ideas. (Dictionary's new electronic format is badly botched, entries after "Design" are inaccessible. Internet Archiveold version).
+
+
+
+
diff --git a/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/Index.java b/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/Index.java
new file mode 100644
index 00000000..3d5c8d5d
--- /dev/null
+++ b/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/Index.java
@@ -0,0 +1,111 @@
+package com.allendowney.thinkdast;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+import java.util.HashSet;
+
+import org.jsoup.select.Elements;
+
+/**
+ * Encapsulates a map from search term to set of TermCounter.
+ *
+ * @author downey
+ *
+ */
+public class Index {
+
+ private Map<String, Set<TermCounter>> index = new HashMap<String, Set<TermCounter>>();
+
+ /**
+ * Adds a TermCounter to the set associated with `term`.
+ *
+ * @param term
+ * @param tc
+ */
+ public void add(String term, TermCounter tc) {
+ Set<TermCounter> set = get(term);
+
+ // if we're seeing a term for the first time, make a new Set
+ if (set == null) {
+ set = new HashSet<TermCounter>();
+ index.put(term, set);
+ }
+ // otherwise we can modify an existing Set
+ set.add(tc);
+ }
+
+ /**
+ * Looks up a search term and returns a set of TermCounters.
+ *
+ * @param term
+ * @return
+ */
+ public Set<TermCounter> get(String term) {
+ return index.get(term);
+ }
+
+ /**
+ * Prints the contents of the index.
+ */
+ public void printIndex() {
+ // loop through the search terms
+ for (String term: keySet()) {
+ System.out.println(term);
+
+ // for each term, print the pages where it appears
+ Set<TermCounter> tcs = get(term);
+ for (TermCounter tc: tcs) {
+ Integer count = tc.get(term);
+ System.out.println(" " + tc.getLabel() + " " + count);
+ }
+ }
+ }
+
+ /**
+ * Returns the set of terms that have been indexed.
+ *
+ * @return
+ */
+ public Set<String> keySet() {
+ return index.keySet();
+ }
+
+ /**
+ * Add a page to the index.
+ *
+ * @param url URL of the page.
+ * @param paragraphs Collection of elements that should be indexed.
+ */
+ public void indexPage(String url, Elements paragraphs) {
+ // make a TermCounter and count the terms in the paragraphs
+ TermCounter tc = new TermCounter(url);
+ tc.processElements(paragraphs);
+
+ // for each term in the TermCounter, add the TermCounter to the index
+ for (String term: tc.keySet()) {
+ add(term, tc);
+ }
+ }
+
+ /**
+ * @param args
+ * @throws IOException
+ */
+ public static void main(String[] args) throws IOException {
+
+ WikiFetcher wf = new WikiFetcher();
+ Index indexer = new Index();
+
+ String url = "https://en.wikipedia.org/wiki/Java_(programming_language)";
+ Elements paragraphs = wf.fetchWikipedia(url);
+ indexer.indexPage(url, paragraphs);
+
+ url = "https://en.wikipedia.org/wiki/Programming_language";
+ paragraphs = wf.fetchWikipedia(url);
+ indexer.indexPage(url, paragraphs);
+
+ indexer.printIndex();
+ }
+}
diff --git a/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/JedisIndex.java b/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/JedisIndex.java
new file mode 100644
index 00000000..ca96b30a
--- /dev/null
+++ b/EclipseChapterProjects/Ch14/src/com/allendowney/thinkdast/JedisIndex.java
@@ -0,0 +1,333 @@
+package com.allendowney.thinkdast;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+
+import org.jsoup.select.Elements;
+
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.Transaction;
+
+/**
+ * Represents a Redis-backed web search index.
+ *
+ */
+public class JedisIndex {
+
+ private Jedis jedis;
+
+ /**
+ * Constructor.
+ *
+ * @param jedis
+ */
+ public JedisIndex(Jedis jedis) {
+ this.jedis = jedis;
+ }
+
+ /**
+ * Returns the Redis key for a given search term.
+ *
+ * @return Redis key.
+ */
+ private String urlSetKey(String term) {
+ return "URLSet:" + term;
+ }
+
+ /**
+ * Returns the Redis key for a URL's TermCounter.
+ *
+ * @return Redis key.
+ */
+ private String termCounterKey(String url) {
+ return "TermCounter:" + url;
+ }
+
+ /**
+ * Checks whether we have a TermCounter for a given URL.
+ *
+ * @param url
+ * @return
+ */
+ public boolean isIndexed(String url) {
+ String redisKey = termCounterKey(url);
+ return jedis.exists(redisKey);
+ }
+
+ /**
+ * Adds a URL to the set associated with `term`.
+ *
+ * @param term
+ * @param tc
+ */
+ public void add(String term, TermCounter tc) {
+ jedis.sadd(urlSetKey(term), tc.getLabel());
+ }
+
+ /**
+ * Looks up a search term and returns a set of URLs.
+ *
+ * @param term
+ * @return Set of URLs.
+ */
+ public Set<String> getURLs(String term) {
+ Set<String> set = jedis.smembers(urlSetKey(term));
+ return set;
+ }
+
+ /**
+ * Looks up a term and returns a map from URL to count.
+ *
+ * @param term
+ * @return Map from URL to count.
+ */
+ public Map<String, Integer> getCounts(String term) {
+ Map<String, Integer> map = new HashMap<String, Integer>();
+ Set<String> urls = getURLs(term);
+ for (String url: urls) {
+ Integer count = getCount(url, term);
+ map.put(url, count);
+ }
+ return map;
+ }
+
+ /**
+ * Looks up a term and returns a map from URL to count.
+ *
+ * @param term
+ * @return Map from URL to count.
+ */
+ public Map<String, Integer> getCountsFaster(String term) {
+ // convert the set of strings to a list so we get the
+ // same traversal order every time
+ List<String> urls = new ArrayList<String>();
+ urls.addAll(getURLs(term));
+
+ // construct a transaction to perform all lookups
+ Transaction t = jedis.multi();
+ for (String url: urls) {
+ String redisKey = termCounterKey(url);
+ t.hget(redisKey, term);
+ }
+ List