CiteSeerX

CiteSeerx (originally called CiteSeer) is a public search engine and digital library for scientific and academic papers, primarily in the fields of computer and information science. CiteSeer holds United States Patent 6,289,342, "Autonomous citation indexing and literature browsing using citation context," granted on September 11, 2001. Stephen R. Lawrence, C. Lee Giles, and Kurt D. Bollacker are the inventors of this patent, which is assigned to NEC Laboratories America, Inc. The patent was filed on May 20, 1998, with a priority date of January 5, 1998. A continuation patent on the same invention, US Patent 6,738,780, was granted to the same inventors and likewise assigned to NEC Labs; it was filed on May 16, 2001 and granted on May 18, 2004. CiteSeer is considered a predecessor of academic search tools such as Google Scholar and Microsoft Academic Search. CiteSeer-like engines and archives usually harvest documents only from publicly available websites and do not crawl publisher websites. For this reason, authors whose documents are freely available are more likely to be represented in the index.

CiteSeer's goal is to improve the dissemination of and access to academic and scientific literature. As a non-profit service that can be freely used by anyone, it has been considered part of the open access movement that is attempting to change academic and scientific publishing to allow greater access to scientific literature. CiteSeer freely provided Open Archives Initiative metadata of all indexed documents and, where possible, linked indexed documents to other sources of metadata such as DBLP and the ACM Portal. To promote open data, CiteSeerx shares its data for non-commercial purposes under a Creative Commons license.[1]

The name can be construed to have at least two explanations. As a pun, a 'sightseer' is a tourist who looks at the sights, so a 'cite seer' would be a researcher who looks at cited papers. Another is that a 'seer' is a prophet, so a 'cite seer' would be a prophet of citations. CiteSeer changed its name to ResearchIndex at one point and then changed it back.

CiteSeerx
Type of site: Bibliographic database
Owner: Pennsylvania State University College of Information Sciences and Technology
Website: citeseerx.ist.psu.edu
Registration: Optional
Launched: 2007
Current status: Active
Content license: Creative Commons BY-NC-SA license[1]

History

CiteSeer and CiteSeer.IST

CiteSeer was created by researchers Lee Giles, Kurt Bollacker and Steve Lawrence in 1997 while they were at the NEC Research Institute (now NEC Labs), Princeton, New Jersey, USA. CiteSeer's goal was to actively crawl and harvest academic and scientific documents on the web and use autonomous citation indexing to permit querying by citation or by document, ranking them by citation impact. At one point, it was called ResearchIndex.

CiteSeer became public in 1998 and had many new features unavailable in academic search engines at that time. These included:

  • Autonomous Citation Indexing automatically created a citation index that could be used for literature search and evaluation (a minimal sketch of the underlying idea follows this list).
  • Citation statistics and related documents were computed for all articles cited in the database, not just the indexed articles.
  • Reference linking allowed browsing of the database using citation links.
  • Citation context showed the context of citations to a given paper, allowing a researcher to quickly and easily see what other researchers had to say about an article of interest.
  • Related documents were shown using citation- and word-based measures, and an active, continuously updated bibliography was shown for each document.
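
The sketch below (in Python) illustrates the kind of data structure such a citation index rests on; it is not CiteSeer's actual implementation, and the parsed citation records are invented for illustration:

    # A minimal sketch of the idea behind autonomous citation indexing: map each
    # cited work to the documents that cite it, together with the sentence
    # surrounding the citation (the "citation context").
    from collections import defaultdict

    # Hypothetical parsed input: (citing paper, cited work, context sentence).
    extracted_citations = [
        ("paper-A", "Giles et al. 1998", "We build on the indexing approach of [3]."),
        ("paper-B", "Giles et al. 1998", "Citation contexts were introduced in [7]."),
        ("paper-B", "Lawrence 2001", "Web coverage is discussed in [9]."),
    ]

    citation_index = defaultdict(list)  # cited work -> list of (citing paper, context)
    for citing, cited, context in extracted_citations:
        citation_index[cited].append((citing, context))

    # Citation statistics and citation contexts for each cited work.
    for cited, entries in citation_index.items():
        print(f"{cited}: cited {len(entries)} time(s)")
        for citing, context in entries:
            print(f"  {citing}: {context}")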

After NEC, in 2004 it was hosted as CiteSeer.IST on the World Wide Web at the College of Information Sciences and Technology, The Pennsylvania State University, and had over 700,000 documents. For enhanced access, performance and research, similar versions of CiteSeer were supported at universities such as the Massachusetts Institute of Technology, the University of Zürich and the National University of Singapore. However, these versions of CiteSeer proved difficult to maintain and are no longer available. Because CiteSeer only indexes freely available papers on the web and does not have access to publisher metadata, it returns lower citation counts than sites, such as Google Scholar, that do have publisher metadata.

CiteSeer had not been comprehensively updated since 2005 due to limitations in its architecture design. It had a representative sampling of research documents in computer and information science, but its coverage was limited to papers that were publicly available, usually on an author's homepage, or that were submitted by an author. To overcome some of these limitations, a modular and open source architecture for CiteSeer was designed: CiteSeerx.

CiteSeerx

CiteSeerx replaced CiteSeer, and all queries to CiteSeer were redirected to it. CiteSeerx is a public search engine and digital library and repository for scientific and academic papers, primarily with a focus on computer and information science.[2] However, CiteSeerx has recently been expanding into other scholarly domains such as economics, physics and others. Released in 2008, it was loosely based on the previous CiteSeer search engine and digital library and is built on a new open source infrastructure, SeerSuite, with new algorithms and their implementations. It was developed by researchers Dr. Isaac Councill and Dr. C. Lee Giles at the College of Information Sciences and Technology, Pennsylvania State University. It continues to support the goals outlined by CiteSeer: to actively crawl and harvest academic and scientific documents on the public web and to use autonomous citation indexing to permit querying by citation and ranking of documents by citation impact. Currently, Lee Giles, Prasenjit Mitra, Susan Gauch, Min-Yen Kan, Pradeep Teregowda, Juan Pablo Fernández Ramírez, Pucktada Treeratpituk, Jian Wu, Douglas Jordan, Steve Carman, Jack Carroll, Jim Jansen, and Shuyi Zheng are or have been actively involved in its development. Recently, a table search feature was introduced.[3] It has been funded by the National Science Foundation, NASA, and Microsoft Research.

CiteSeerx continues to be rated as one of the world's top repositories and was rated number 1 in July 2010.[4] It currently has over 6 million documents with nearly 6 million unique authors and 120 million citations.

CiteSeerx also shares its software, data, databases and metadata with other researchers, currently via Amazon S3 and rsync.[5] Its new modular open source architecture and software (previously available on SourceForge, now on GitHub) is built on Apache Solr and other Apache and open source tools, which allows it to serve as a testbed for new algorithms in document harvesting, ranking, indexing, and information extraction.

CiteSeerx caches some of the PDF files it has scanned. As such, each page includes a DMCA link which can be used to report copyright violations.[6]

Current features

Automated information extraction

CiteSeerx uses automated information extraction tools, usually built on machine learning methods such as ParsCit, to extract scholarly document metadata such as title, authors, abstract, and citations. As such, there are sometimes errors in authors and titles. Other academic search engines have similar errors.
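
As a deliberately naive illustration of the kind of field extraction involved (this is not ParsCit, which uses trained models rather than hand-written patterns), the Python sketch below pulls authors, title and year out of one well-behaved reference string; strings that deviate from the pattern are exactly where errors of the sort described above creep in:

    import re

    # A made-up, well-behaved reference string.
    reference = "Giles, Bollacker and Lawrence. CiteSeer: an automatic citation indexing system. 1998."

    # Hand-written pattern: authors, then title, then a four-digit year.
    pattern = re.compile(r"^(?P<authors>.+?)\.\s+(?P<title>.+?)\.\s+(?P<year>\d{4})\.$")
    match = pattern.match(reference)
    if match:
        print("authors:", match.group("authors"))  # Giles, Bollacker and Lawrence
        print("title:  ", match.group("title"))    # CiteSeer: an automatic citation indexing system
        print("year:   ", match.group("year"))     # 1998
    # Abbreviated initials ("C. L. Giles"), missing years, and similar variations
    # break such patterns, which is why real extractors rely on machine learning.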

Focused crawling

CiteSeerx crawls publicly available scholarly documents, primarily from author webpages and other open resources, and does not have access to publisher metadata. As such, citation counts in CiteSeerx are usually lower than those in Google Scholar and Microsoft Academic Search, which do have access to publisher metadata.

Usage

CiteSeerx has nearly 1 million users worldwide, based on unique IP addresses, and receives millions of hits daily. Annual downloads of document PDFs were nearly 200 million in 2015.

Data

CiteSeerx data is regularly shared under a Creative Commons BY-NC-SA license with researchers worldwide and has been, and continues to be, used in many experiments and competitions.

Other SeerSuite-based search engines

The CiteSeer model was extended to cover academic documents in business with SmealSearch and in e-business with eBizSearch. However, these were not maintained by their sponsors. An older version of both could once be found at BizSeer.IST, but it is no longer in service.

Other Seer-like search and repository systems have been built for chemistry (ChemXSeer) and for archaeology (ArchSeer). Another, BotSeer, was built for robots.txt file search. All of these are built on the open source tool SeerSuite, which uses the open source indexer Lucene.

References

  1. "CiteSeerX Data Policy". Retrieved 2015-11-10.
  2. "About CiteSeerX". Retrieved 2010-05-07.
  3. "The CiteSeerX Team". Pennsylvania State University. Retrieved 2018-05-01.
  4. "Ranking Web of World Repositories: Top 800 Repositories". Cybermetrics Lab. July 2010. Archived from the original on 2010-07-24. Retrieved 2010-07-24.
  5. "About CiteSeerX Data". Pennsylvania State University. Retrieved 2012-01-25.
  6. For example, "CiteSeerx – DMCA Notice". CiteSeerX 10.1.1.604.4916: "The document with the identifier '10.1.1.604.4916' has been removed due to a DMCA takedown notice. If you believe the removal has been in error, please contact us through the feedback page, along with the identifier mentioned in this page."

Further reading

  • Giles, C. Lee; Bollacker, Kurt D.; Lawrence, Steve (1998). "CiteSeer: an automatic citation indexing system". Proceedings of the Third ACM Conference on Digital Libraries. pp. 89–98. CiteSeerX 10.1.1.30.6847. doi:10.1145/276675.276685. ISBN 978-0-89791-965-4.

Anomaly detection

In data mining, anomaly detection (also outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically the anomalous items will translate to some kind of problem such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are also referred to as outliers, novelties, noise, deviations and exceptions.

In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular unsupervised methods) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro clusters formed by these patterns.

Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance being generated by the learnt model.
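
A minimal Python sketch of the unsupervised case, under the stated assumption that most of the (unlabeled) data is normal; the data, threshold and use of z-scores here are illustrative choices, not a prescribed method:

    import numpy as np

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0.0, 1.0, 1000),    # mostly "normal" behaviour
                           np.array([8.5, -9.2, 11.0])])  # a few injected outliers

    # Flag points that fit least with the rest: large distance from the mean,
    # measured in standard deviations (z-score).
    z_scores = np.abs(data - data.mean()) / data.std()
    outliers = data[z_scores > 4.0]   # the threshold is a tuning choice
    print("flagged as anomalous:", outliers)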

Artificial neural network

Artificial neural networks (ANN) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge about cats, for example, that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the learning material that they process.

An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.

In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called 'edges'. Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
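
A minimal Python sketch of that computation for a tiny network, where each layer applies a weight matrix, adds a bias, and passes the weighted sums through a non-linearity; the sizes, weights and choice of tanh are arbitrary illustrations:

    import numpy as np

    def layer(inputs, weights, biases):
        # each neuron: non-linear function (tanh) of the weighted sum of its inputs
        return np.tanh(weights @ inputs + biases)

    rng = np.random.default_rng(1)
    x = rng.normal(size=3)                              # input layer: 3 signals
    h = layer(x, rng.normal(size=(4, 3)), np.zeros(4))  # hidden layer: 4 neurons
    y = layer(h, rng.normal(size=(2, 4)), np.zeros(2))  # output layer: 2 neurons
    print(y)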

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.

Association rule learning

Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. This rule-based approach also generates new rules as it analyzes more data. The ultimate goal, assuming a large enough dataset, is to help a machine mimic the human brain’s feature extraction and abstract association capabilities from new uncategorized data.

Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as, e.g., promotional pricing or product placements.
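
A small worked example of that rule in Python, using two common measures of interestingness, support (how often the items occur together) and confidence (how often the consequent follows the antecedent); the transactions are invented for illustration:

    transactions = [
        {"onions", "potatoes", "burger"},
        {"onions", "potatoes", "burger", "beer"},
        {"onions", "potatoes"},
        {"milk", "bread"},
        {"potatoes", "burger"},
    ]

    antecedent, consequent = {"onions", "potatoes"}, {"burger"}
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent | consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)

    print("support    =", both / n)     # all three items in 2 of 5 baskets -> 0.4
    print("confidence =", both / ante)  # burger in 2 of 3 onion+potato baskets -> ~0.67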

In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.

Dictator game

The dictator game is a popular experimental instrument in psychology and economics, a derivative of the ultimatum game. The term "game" is a misnomer because it captures a decision by a single player: to send money to another or not. The results – most players choose to send money – evidence the role of fairness and norms in economic behavior, and undermine the assumption of narrow self-interest.

Econometrica

Econometrica is a peer-reviewed academic journal of economics, publishing articles in many areas of economics, especially econometrics. It is published by Wiley-Blackwell on behalf of the Econometric Society. The current editor-in-chief is Joel Sobel.

Econometrica was established in 1933. Its first editor was Ragnar Frisch, recipient of the first Nobel Memorial Prize in Economic Sciences in 1969, who served as an editor from 1933 to 1954. Although Econometrica is currently published entirely in English, the first few issues also contained scientific articles written in French.

The Econometric Society aims to attract high-quality applied work in economics for publication in Econometrica through the Frisch Medal. This prize is awarded every two years for an empirical or theoretical applied article published in Econometrica during the past five years.

Einstein–Cartan–Evans theory

Einstein–Cartan–Evans theory or ECE theory was an attempted unified theory of physics proposed by the Welsh chemist and physicist Myron Wyn Evans (born May 26, 1950), which claimed to unify general relativity, quantum mechanics and electromagnetism. The hypothesis was largely published in the journal Foundations of Physics Letters between 2003 and 2005. Several of Evans' central claims were later shown to be mathematically incorrect and, in 2008, the new editor of Foundations of Physics, Nobel laureate Gerard 't Hooft, published an editorial note effectively retracting the journal's support for the hypothesis.

Experimental philosophy

Experimental philosophy is an emerging field of philosophical inquiry that makes use of empirical data—often gathered through surveys which probe the intuitions of ordinary people—in order to inform research on philosophical questions. This use of empirical data is widely seen as opposed to a philosophical methodology that relies mainly on a priori justification, sometimes called "armchair" philosophy, by experimental philosophers.

Experimental philosophy initially began by focusing on philosophical questions related to intentional action, the putative conflict between free will and determinism, and causal vs. descriptive theories of linguistic reference. However, experimental philosophy has continued to expand to new areas of research.

Disagreement about what experimental philosophy can accomplish is widespread. One claim is that the empirical data gathered by experimental philosophers can have an indirect effect on philosophical questions by allowing for a better understanding of the underlying psychological processes which lead to philosophical intuitions. Others claim that experimental philosophers are engaged in conceptual analysis, but taking advantage of the rigor of quantitative research to aid in that project. Finally, some work in experimental philosophy can be seen as undercutting the traditional methods and presuppositions of analytic philosophy. Several philosophers have offered criticisms of experimental philosophy.

Fast Fourier transform

A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from O(N^2), which arises if one simply applies the definition of DFT, to O(N log N), where N is the data size. The difference in speed can be enormous, especially for long data sets where N may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
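
A minimal recursive radix-2 Cooley–Tukey sketch in Python (assuming the input length is a power of two) shows where the saving comes from: each transform is split into two half-size transforms plus O(N) combining work, giving O(N log N) overall:

    import cmath

    def fft(x):
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])   # DFT of the even-indexed samples
        odd = fft(x[1::2])    # DFT of the odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + twiddle
            out[k + n // 2] = even[k] - twiddle
        return out

    print(fft([1, 2, 3, 4]))   # ~[10, -2+2j, -2, -2-2j] (up to rounding), matching the DFT definition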

Fast Fourier transforms are widely used for applications in engineering, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in the Top 10 Algorithms of the 20th Century by the IEEE journal Computing in Science & Engineering. In practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log N. This huge improvement made the calculation of the DFT practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.

The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^(−2πi/N) is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
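
That last property can be checked directly in Python with NumPy: conjugating the input and output of a forward FFT and dividing by N reproduces the inverse transform (the random test vector here is purely illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=8) + 1j * rng.normal(size=8)   # arbitrary complex signal
    X = np.fft.fft(x)

    # Inverse DFT via the forward transform: conjugate, transform, conjugate, scale by 1/N.
    inverse_via_forward = np.conj(np.fft.fft(np.conj(X))) / len(X)
    print(np.allclose(inverse_via_forward, np.fft.ifft(X)))   # True
    print(np.allclose(inverse_via_forward, x))                # True: the round trip recovers x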

Gödel Prize

The Gödel Prize is an annual prize for outstanding papers in the area of theoretical computer science, given jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory (ACM SIGACT). The award is named in honor of Kurt Gödel. Gödel's connection to theoretical computer science is that he was the first to mention the "P versus NP" question, in a 1956 letter to John von Neumann in which Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time.

The Gödel Prize has been awarded since 1993. The prize is awarded either at STOC (the ACM Symposium on Theory of Computing, one of the main North American conferences in theoretical computer science) or ICALP (the International Colloquium on Automata, Languages and Programming, one of the main European conferences in the field). To be eligible for the prize, a paper must be published in a refereed journal within the last 14 (formerly 7) years. The prize includes a reward of US$5,000.

The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT.

Introduction to Algorithms

Introduction to Algorithms is a book by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The book has been widely used as the textbook for algorithms courses at many universities and is commonly cited as a reference for algorithms in published papers, with over 10,000 citations documented on CiteSeerX. The book sold half a million copies during its first 20 years. Its fame has led to the common use of the abbreviation "CLRS" (Cormen, Leiserson, Rivest, Stein), or, in the first edition, "CLR" (Cormen, Leiserson, Rivest).

In the preface, the authors write about how the book was written to be comprehensive and useful in both teaching and professional environments. Each chapter focuses on an algorithm, and discusses its design techniques and areas of application. Instead of using a specific programming language, the algorithms are written in pseudocode. The descriptions focus on the aspects of the algorithm itself, its mathematical properties, and emphasize efficiency.

List of aperiodic sets of tiles

In geometry, a tiling is a partition of the plane (or any other geometric setting) into closed sets (called tiles), without gaps or overlaps (other than the boundaries of the tiles). A tiling is considered periodic if there exist translations in two independent directions which map the tiling onto itself. Such a tiling is composed of a single fundamental unit or primitive cell which repeats endlessly and regularly in two independent directions. An example of such a tiling is shown in the adjacent diagram (see the image description for more information). A tiling that cannot be constructed from a single primitive cell is called nonperiodic. If a given set of tiles allows only nonperiodic tilings, then this set of tiles is called aperiodic. The tilings obtained from an aperiodic set of tiles are often called aperiodic tilings, though strictly speaking it is the tiles themselves that are aperiodic. (The tiling itself is said to be "nonperiodic".)

The first table explains the abbreviations used in the second table. The second table contains all known aperiodic sets of tiles and gives some additional basic information about each set. This list of tiles is still incomplete.

Long short-term memory

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a Turing machine can). It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

Bloomberg Business Week wrote: "These powers make LSTM arguably the most commercial AI achievement, used for everything from predicting diseases to composing music."

A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
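
A minimal NumPy sketch of one step of such a unit, with the three gates as sigmoids controlling what enters, stays in and leaves the cell state; the sizes, weights and variable names are illustrative, not a reference implementation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
        f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
        o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
        g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate cell update
        c = f * c_prev + i * g                               # cell remembers / forgets
        h = o * np.tanh(c)                                   # gated output
        return h, c

    rng = np.random.default_rng(4)
    nx, nh = 3, 5
    W = {k: rng.normal(size=(nh, nx)) for k in "ifog"}
    U = {k: rng.normal(size=(nh, nh)) for k in "ifog"}
    b = {k: np.zeros(nh) for k in "ifog"}
    h, c = lstm_step(rng.normal(size=nx), np.zeros(nh), np.zeros(nh), W, U, b)
    print(h.shape, c.shape)   # (5,) (5,)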

LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the exploding and vanishing gradient problems that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.

Memory-level parallelism

Memory-level parallelism (MLP) is a term in computer architecture referring to the ability to have pending multiple memory operations, in particular cache misses or translation lookaside buffer (TLB) misses, at the same time.

In a single processor, MLP may be considered a form of instruction-level parallelism (ILP). However, ILP is often conflated with superscalar, the ability to execute more than one instruction at the same time, e.g. a processor such as the Intel Pentium Pro is five-way superscalar, with the ability to start executing five different microinstructions in a given cycle, but it can handle four different cache misses for up to 20 different load microinstructions at any time.

It is possible to have a machine that is not superscalar but which nevertheless has high MLP.

Arguably a machine that has no ILP, which is not superscalar, which executes one instruction at a time in a non-pipelined manner, but which performs hardware prefetching (not software instruction-level prefetching) exhibits MLP (due to multiple prefetches outstanding) but not ILP. This is because there are multiple memory operations outstanding, but not instructions. Instructions are often conflated with operations.

Furthermore, multiprocessor and multithreaded computer systems may be said to exhibit MLP and ILP due to parallelism—but not intra-thread, single process, ILP and MLP. Often, however, we restrict the terms MLP and ILP to refer to extracting such parallelism from what appears to be non-parallel single threaded code.

Pyramid (image processing)

Pyramid, or pyramid representation, is a type of multi-scale signal representation developed by the computer vision, image processing and signal processing communities, in which a signal or an image is subject to repeated smoothing and subsampling. Pyramid representation is a predecessor to scale-space representation and multiresolution analysis.
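
A minimal sketch of such a pyramid in Python, repeatedly smoothing with a Gaussian filter (SciPy is used here only for convenience) and subsampling by a factor of two; the image, number of levels and filter width are arbitrary:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def build_pyramid(image, levels=4, sigma=1.0):
        pyramid = [image]
        for _ in range(levels - 1):
            smoothed = gaussian_filter(pyramid[-1], sigma)   # smoothing step
            pyramid.append(smoothed[::2, ::2])               # subsampling step
        return pyramid

    image = np.random.default_rng(5).random((256, 256))
    for level, img in enumerate(build_pyramid(image)):
        print(level, img.shape)   # (256, 256), (128, 128), (64, 64), (32, 32)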

Recurrent neural network

A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

The term "recurrent neural network" is used indiscriminately to refer to two broad classes of networks with a similar general structure, where one is finite impulse and the other is infinite impulse. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that can not be unrolled.
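
A minimal Python sketch of the recurrence behind that internal state: the same weights are applied at every time step, and the hidden state h carries information forward along the sequence (sizes and weights are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    nx, nh = 3, 4
    Wx, Wh, b = rng.normal(size=(nh, nx)), rng.normal(size=(nh, nh)), np.zeros(nh)

    sequence = rng.normal(size=(10, nx))    # 10 time steps of 3-dimensional input
    h = np.zeros(nh)                        # internal state (memory)
    for x_t in sequence:
        h = np.tanh(Wx @ x_t + Wh @ h + b)  # h depends on all earlier inputs
    print(h)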

Both finite impulse and infinite impulse recurrent networks can have additional stored state, and the storage can be under direct control by the neural network. The storage can also be replaced by another network or graph, if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated state or gated memory, and are part of long short-term memory networks (LSTMs) and gated recurrent units.

Rendering (computer graphics)

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model (or models in what collectively could be called a scene file) by means of computer programs. Also, the results of displaying such a model can be called a render. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.
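
For reference, the rendering equation mentioned above is commonly written (here in LaTeX notation) with L_o the outgoing radiance at point x in direction ω_o, L_e the emitted radiance, L_i the incoming radiance, f_r the bidirectional reflectance distribution function, and n the surface normal:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i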

Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.

Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics, and software development.

In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in realtime. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.

Scirus

Scirus was a comprehensive science-specific search engine, first launched in 2001. Like CiteSeerX and Google Scholar, it was focused on scientific information. Unlike CiteSeerX, Scirus was not limited to computer science and IT, and not all of its results included full text. It also sent its scientific search results to Scopus, an abstract and citation database covering scientific research output globally. Scirus was owned and operated by Elsevier.

In 2013 an announcement appeared on the Scirus homepage, announcing the site's retirement in 2014:

"We are sad to say goodbye. Scirus is set to retire in early 2014. An official retirement date will be posted here as soon as it is determined. To ensure a smooth transition, we are informing you now so that you have sufficient time to find an alternative search solution for science-specific content. Thank you for being a devoted user of Scirus. We have enjoyed serving you."By February 2014, the Scirus homepage indicated that the service was no longer running.

Status quo bias

Status quo bias is an emotional bias; a preference for the current state of affairs. The current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss.

Status quo bias should be distinguished from a rational preference for the status quo ante, as when the current state of affairs is objectively superior to the available alternatives, or when imperfect information is a significant problem. A large body of evidence, however, shows that status quo bias frequently affects human decision-making.

Status quo bias should also be distinguished from psychological inertia, which refers to a lack of intervention in the current course of affairs. For example, consider a pristine lake where an industrial firm is planning to dump toxic chemicals. Status quo bias would involve avoiding change, and therefore intervening to prevent the firm from dumping toxic chemicals in the lake. Conversely, inertia would involve not intervening in the course of events that will change the lake.

Status quo bias interacts with other non-rational cognitive processes such as loss aversion, existence bias, endowment effect, longevity, mere exposure, and regret avoidance. Experimental evidence for the detection of status quo bias is seen through the use of the reversal test. A vast amount of experimental and field examples exist. Behaviour in regard to retirement plans, health, and ethical choices show evidence of the status quo bias.

Susan Athey

Susan Carleton Athey (born November 29, 1970) is an American economist. She is The Economics of Technology Professor at the Stanford Graduate School of Business. Prior to joining Stanford, she was a professor at Harvard University. She is the first female winner of the John Bates Clark Medal. She currently serves as a long-term consultant to Microsoft as well as a consulting researcher to Microsoft Research.
