Adaptive hypermedia (AH) is hypermedia that adapts according to a user model. In contrast to linear media, where all users are offered a standard series of hyperlinks, adaptive hypermedia tailors what the user is offered based on a model of the user's goals, preferences and knowledge, thus providing the links or content most appropriate to the current user.
Adaptive hypermedia is used in educational hypermedia, on-line information and help systems, as well as institutional information systems. Adaptive educational hypermedia tailors what the learner sees to that learner's goals, abilities, needs, interests, and knowledge of the subject, by providing hyperlinks that are most relevant to the user in an effort to shape the user's cognitive load. The teaching tools "adapt" to the learner. On-line information systems provide reference access to information for users with differing knowledge levels of the subject.
An adaptive hypermedia system should satisfy three criteria: it should be a hypertext or hypermedia system, it should have a user model and it should be able to adapt the hypermedia using the model.
A semantic distinction is made between adaptation, referring to system-driven changes for personalisation, and adaptability, referring to user-driven changes. One way of looking at this is that adaptation is automatic, whereas adaptability is not. From an epistemic point of view, adaptation can be described as analytic and a priori, whereas adaptability is synthetic and a posteriori. In other words, any adaptable system, as it "contains" a human, is by default "intelligent", whereas an adaptive system that presents "intelligence" is more surprising and thus more interesting.
The system categories in which user modelling and adaptivity have been deployed by various researchers in the field share an underlying architecture. The conceptual structure for adaptive systems generally consists of interdependent components: a user model, a domain model and an interaction model.
The user model is a representation of the knowledge and preferences which the system 'believes' a user (which may be an individual, a group of people or non-human agents) possesses. It is a knowledge source which is separable by the system from the rest of its knowledge and contains explicit assumptions about the user. Knowledge for the user model can be acquired implicitly by making inferences about users from their interaction with the system, by carrying out some form of test, or by assigning users to generic user categories usually called 'stereotypes'. The student model consists of a personal profile (which includes static data, e.g., name and password), a cognitive profile (adaptable data such as preferences), and a student knowledge profile. Systems may adapt depending on user features such as the user's goals, preferences, and knowledge of the subject.
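The three profiles described above, together with implicit knowledge acquisition and stereotype assignment, can be sketched as a small data structure. The class, field names, and the simple update heuristic below are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    # Personal profile: static data such as name and password.
    name: str
    password: str
    # Cognitive profile: adaptable data such as presentation preferences.
    preferences: dict = field(default_factory=dict)
    # Knowledge profile: per-concept knowledge estimates in [0.0, 1.0].
    knowledge: dict = field(default_factory=dict)
    # Stereotype the user is currently assigned to.
    stereotype: str = "novice"

    def observe(self, concept: str, success: bool) -> None:
        """Implicit acquisition: infer knowledge from interaction."""
        level = self.knowledge.get(concept, 0.0)
        # Move the estimate toward 1.0 on success, toward 0.0 on failure.
        self.knowledge[concept] = level + 0.3 * (1.0 - level) if success else level * 0.7
        # Re-derive the stereotype from the average knowledge level.
        avg = sum(self.knowledge.values()) / len(self.knowledge)
        self.stereotype = "expert" if avg > 0.6 else "novice"
```

A test could also set knowledge directly, which corresponds to the explicit acquisition route (carrying out some form of test) mentioned above.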
The domain model defines the aspects of the application which can be adapted or which are otherwise required for the operation of the adaptive system. The domain model contains several concepts that stand as the backbone for the content of the system. Other terms which have been used for this concept include content model, application model, system model, device model and task model. It describes educational content such as information pages, examples, and problems. The simplest content model relates every content item to exactly one domain concept (in this model, this concept is frequently referred to as a domain topic). More advanced content models use multi-concept indexing for each content item and sometimes use roles to express the nature of item-concept relationship. A cognitively valid domain model should capture descriptions of the application at three levels, namely:
Each content concept has a set of topics. Topics represent individual pieces of knowledge for each domain and the size of each topic varies in relation to the particular domain. Additionally, topics are linked to each other forming a semantic network. This network is the structure of the knowledge domain.
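A minimal sketch of such a domain model, with topics linked into a semantic network and each content item indexed under exactly one topic (the simplest content model described above), might look as follows; all class, method, and topic names are illustrative:

```python
class DomainModel:
    """Topics form a semantic network; each content item maps to one topic."""

    def __init__(self):
        self.topics = {}    # topic name -> set of linked topic names
        self.content = {}   # content item id -> its single topic

    def add_topic(self, name):
        self.topics.setdefault(name, set())

    def link(self, a, b):
        """Topics are linked symmetrically, forming the semantic network."""
        self.add_topic(a)
        self.add_topic(b)
        self.topics[a].add(b)
        self.topics[b].add(a)

    def add_content(self, item_id, topic):
        """Simplest content model: one item, exactly one domain topic."""
        self.add_topic(topic)
        self.content[item_id] = topic

    def related_content(self, topic):
        """Content indexed under a topic or any directly linked topic."""
        neighbourhood = {topic} | self.topics.get(topic, set())
        return [i for i, t in self.content.items() if t in neighbourhood]
```

A multi-concept content model, as mentioned above, would instead map each item to a set of (concept, role) pairs.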
The interaction or adaptation model contains everything which is concerned with the relationships which exist between the representation of the users (the user model) and the representation of the application (the domain model). It displays information to the user based on his or her cognitive preferences. For instance, the module will divide a page's content into chunks with conditions set to only display to certain users or preparing two variants of a single concept page with a similar condition. The two main aspects to the interaction model are capturing the appropriate raw data and representing the inferences, adaptations and evaluations which may occur.
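The chunk-and-condition scheme described above can be sketched as follows: a page is a list of (condition, fragment) pairs, and a fragment is shown only if its condition holds for the current user model. The page fragments and the `stereotype` attribute are invented for illustration:

```python
def render_page(chunks, user):
    """chunks: list of (condition, text); condition is a predicate on user."""
    return "\n".join(text for condition, text in chunks if condition(user))

# A page divided into chunks: one shared fragment plus two conditional
# variants, mirroring the "two variants of a single concept page" idea.
page = [
    (lambda u: True,                        "Hypertext links connect pages."),
    (lambda u: u["stereotype"] == "novice", "(A link is clickable text that leads elsewhere.)"),
    (lambda u: u["stereotype"] == "expert", "See also: typed links and link anchors."),
]
```

In a real adaptation model the predicates would be expressed as authored rules over the user model rather than inline lambdas.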
Content-level and link-level adaptation are distinguished as two different classes of hypermedia adaptation; the first is termed adaptive presentation and the second, adaptive navigation support.
The idea of various adaptive presentation techniques is to adapt the content of a page accessed by a particular user to current knowledge, goals, and other characteristics of the user. For example, a qualified user can be provided with more detailed and deep information while a novice can receive additional explanations. Adaptive text presentation is the most studied technology of hypermedia adaptation. There are a number of different techniques for adaptive text presentation.
The idea of adaptive navigation support techniques is to help users find their paths in hyperspace by adapting the way links are presented to the goals, knowledge, and other characteristics of an individual user. Although this area of research is newer than adaptive presentation, a number of interesting techniques have already been suggested and implemented. Four kinds of link presentation can be distinguished, which differ in what can be altered and adapted:
Adaptation methods are defined as generalizations of existing adaptation techniques. Each method is based on a clear adaptation idea which can be presented at the conceptual level.
Adaptation techniques refer to methods of providing adaptation in existing AH systems.
Authoring adaptive hypermedia involves design and creation processes for content, usually in the form of a resource collection and domain model, and for adaptive behaviour, usually in the form of IF-THEN rules. Recently, adaptation languages have been proposed for increased generality. As adaptive hypermedia adapts at least to the user, authoring of AH comprises at least a user model, and may also include other aspects.
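The IF-THEN form of adaptive behaviour can be sketched as a rule that selects between authored content variants, for example for visual and verbal learners; the attribute names and file names below are hypothetical:

```python
def select_variant(user, variants):
    """IF the learner's style matches a variant's style THEN deliver it."""
    for rule_style, content in variants:
        if user.get("learning_style") == rule_style:
            return content
    # Fall back to the first variant when no rule fires.
    return variants[0][1]

# Two equivalent versions of the same material, one per learning style.
variants = [
    ("visual", "diagram_of_water_cycle.svg"),
    ("verbal", "text_explanation_of_water_cycle.html"),
]
```

An adaptation language would let the author state such rules declaratively instead of hard-coding them.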
Authoring of adaptive hypermedia was long considered as secondary to adaptive hypermedia delivery. This was not surprising in the early stages of adaptive hypermedia, when the focus was on research and expansion. Now that adaptive hypermedia itself has reached a certain maturity, the issue is to bring it out to the community and let the various stakeholders reap the benefits. However, authoring and creation of hypermedia is not trivial. Unlike in traditional authoring for hypermedia and the web, a linear storyline is not enough. Instead, various alternatives have to be created for the given material. For example, if a course should be delivered both to visual and verbal learners, there should be created at least two perfectly equivalent versions of the material in visual and in verbal form, respectively. Moreover, an adaptation strategy should be created that states that the visual content should be delivered to visual learners, whereas the verbal content should be delivered to the verbal learners. Thus, authors should not only be able to create different versions of their content, but be able to specify (and in some cases, design from scratch) adaptation strategies of delivery of contents. Issues with which authoring of adaptive hypermedia is confronted are:
There already exist some approaches to help authors build adaptive-hypermedia-based systems. However, there is a strong need for high-level approaches, formalisms and tools that support and facilitate the description of reusable adaptive hypermedia and websites. Such models have started appearing (see, e.g., the AHAM model of adaptive hypermedia, or the LAOS framework for authoring of adaptive hypermedia). Moreover, there has recently been a shift in interest, as it became clearer that the implementation-oriented approach would forever keep adaptive hypermedia away from the 'layman' author. The creator of adaptive hypermedia cannot be expected to know all facets of the process as described above, but can reasonably be trusted to be an expert in one of them. For instance, it is reasonable to expect that there are content experts (e.g., experts in chemistry). It is reasonable to expect, for adaptive educational hypermedia, that there are experts in pedagogy, who are able to add pedagogical metadata to the content created by content experts. Finally, it is reasonable to expect that adaptation experts will be the ones creating the implementation of adaptation strategies, along with descriptions (metadata) of such a nature that they can be understood and applied by layman authors. This division of work determines the different authoring personas that should be expected to collaborate in the creation process of adaptive hypermedia. Moreover, the contributions of these various personas correspond to the different modules that are to be expected in adaptive hypermedia systems.
By the early 1990s, the two main parent areas – hypertext and user modeling – had achieved a level of maturity that allowed for the research in these areas to be explored together. Many researchers had recognized the problems of static hypertext in different application areas, and explored various ways to adapt the output and behavior of hypertext systems to suit the needs of individual users. Several early papers on adaptive hypermedia were published in the User Modeling and User-Adapted Interaction (UMUAI) journal; the first workshop on adaptive hypermedia was held during a user modeling conference; and a special issue of UMUAI on adaptive hypermedia was published in 1996. Several innovative adaptive hypermedia techniques had been developed, and several research-level adaptive hypermedia systems had been built and evaluated.
After 1996, adaptive hypermedia grew rapidly. Research teams commenced projects in adaptive hypermedia, and many students selected the subject area for their PhD theses. A book on adaptive hypermedia and a special issue of the New Review of Hypermedia and Multimedia (1998) were published. Two main factors accounted for this growth. The first was the Web: with its diverse audience, it boosted research into adaptivity. Almost all the papers published before 1996 describe classic pre-Web hypertext and hypermedia; the majority of papers published since 1996 are devoted to Web-based adaptive hypermedia systems. The second factor was the accumulation and consolidation of research experience in the field. Early papers provided few references to similar work in adaptive hypermedia, and described original laboratory systems developed to demonstrate and explore innovative ideas. After 1996, papers cite earlier work, and usually present either real-world systems, or research systems developed for real-world settings by elaborating on or extending techniques suggested earlier. This is indicative of the relative maturity of adaptive hypermedia as a research direction.
Adaptive hypermedia and user modeling continue to be actively researched, with results published in several journals and conferences, such as User Modeling and User-Adapted Interaction (UMUAI) and the International Conference on User Modeling, Adaptation, and Personalization (UMAP).
The term “adaptation” in computer science refers to a process in which an interactive system (adaptive system) adapts its behaviour to individual users based on information acquired about its user(s) and its environment.

Attention management
Attention management refers to models and tools for supporting the management of attention at the individual or at the collective level (cf. attention economy), and at the short-term (quasi real time) or at a longer term (over periods of weeks or months).
The researcher Herbert A. Simon pointed out that when there is a vast availability of information, attention becomes the scarcer resource, as human beings cannot digest all the information. According to Maura Thomas, attention management is the most important skill for the 21st century. With the digital revolution and the advent of the internet and communication devices, time management is no longer enough to guarantee a good quality of work. Allocating time to perform one activity does not mean that it will receive attention if constant interruptions and distractions intervene. Therefore, people should stop worrying about time management and focus on attention management. The ability to control distractions and stay focused is essential to producing higher-quality results. Research conducted at Stanford shows that single-tasking is more effective and productive than multi-tasking. Different studies have been conducted on using Information and Communications Technology (ICT) for supporting attention, and in particular, models have been elaborated for supporting attention (Davenport & Beck 2001) (Roda & Nabeth 2008).
In supporting the management of attention, the objective is to provide solutions to:
people's perceptual and cognitive limitations, such as the limited capacity of human short-term memory (an average of 4 items (Cowan 2001) can be managed at a given time), or the theoretical cognitive limit to the number of people with whom one can maintain stable social relationships (Dunbar's number of 150).
social interaction overload (which may, for instance, originate from online social networking services, from which people receive many solicitations)
interruption (Kebinger 2005)
multitasking (Rosen 2008)

Tools can be designed for supporting attention:
at the organizational level, by supporting organization processes (Apostolou, Karapiperis & Stojanovic 2008)
at the collective level
at the individual level, for instance using attentive user interfaces (Vertegaal 2003) (Vertegaal et al. 2006) (Huberman & Wu 2008).
at the individual level, by helping people to assess and analyze their attention-related practices (for instance with the tool AttentionScape (Davenport & Beck 2001)).

These tools are often adaptive hypermedia, and often rely on profiling the user (Nabeth 2008) in order to determine how to better support people's attention.

Attentive user interface
Attentive user interfaces (AUI) are user interfaces that manage the user's attention. For instance, an AUI can manage notifications (Horvitz et al. 2003), deciding when to interrupt the user, the kind of warnings, and the level of detail of the messages presented to the user.
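As a toy sketch of such an interruption decision (not Horvitz's actual model), one might weigh a notification's urgency against the estimated cost of breaking the user's focus; the scoring scheme and thresholds below are invented for illustration:

```python
def decide(urgency, user_focus, *, interrupt_at=0.5, defer_at=0.2):
    """Return 'interrupt', 'defer' (queue for later), or 'drop'.

    urgency and user_focus are assumed to be scores in [0.0, 1.0],
    e.g. estimated from message metadata and activity sensing.
    """
    value = urgency - user_focus  # net benefit of interrupting now
    if value >= interrupt_at:
        return "interrupt"
    if urgency >= defer_at:
        return "defer"
    return "drop"
```

A real AUI would additionally adapt the kind of warning and the level of detail of the message, as noted above.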
Attentive user interfaces, by generating only the relevant information, can in particular be used to display information in a way that increases the effectiveness of the interaction (Huberman & Wu 2008).
According to Vertegaal, there are four main types of attentive user interfaces (Vertegaal 2003) (Vertegaal et al. 2006):
Interruption decision interfaces
Visual detail management interfaces

Collaborative search engine
Collaborative search engines (CSE) are Web search engines, and enterprise searches within company intranets, that let users combine their efforts in information retrieval (IR) activities, share information resources collaboratively using knowledge tags, and allow experts to guide less experienced people through their searches. Collaboration partners do so by providing query terms, collective tagging, adding comments or opinions, rating search results, and sharing links clicked during former (successful) IR activities with users who have the same or a related information need.

Educational entertainment
Educational entertainment (also referred to by the portmanteau edutainment) is media designed to educate through entertainment. Most often it includes content intended to teach but has incidental entertainment value. It has been used by academia, corporations, governments, and other entities in various countries to disseminate information in classrooms and/or via television, radio, and other media to influence viewers' opinions and behaviors.

Genieo
Genieo Innovation is an Israeli company, specializing in unwanted software which includes advertising and user tracking software, commonly referred to as a potentially unwanted program, adware, privacy-invasive software, grayware, or malware. They are best known for Genieo, an application of this type. They also own and operate InstallMac which distributes additional 'optional' search modifying software with other applications. In 2014, Genieo Innovation was acquired for $34 million by Somoto, another company which "bundles legitimate applications with offers for additional third party applications that may be unwanted by the user". This sector of the Israeli software industry is frequently referred to as Download Valley.

Intelligent tutoring system
An intelligent tutoring system (ITS) is a computer system that aims to provide immediate and customized instruction or feedback to learners, usually without requiring intervention from a human teacher. ITSs have the common goal of enabling learning in a meaningful and effective manner by using a variety of computing technologies. There are many examples of ITSs being used in both formal education and professional settings in which they have demonstrated their capabilities and limitations. There is a close relationship between intelligent tutoring, cognitive learning theories and design; and there is ongoing research to improve the effectiveness of ITS. An ITS typically aims to replicate the demonstrated benefits of one-to-one, personalized tutoring, in contexts where students would otherwise have access to one-to-many instruction from a single teacher (e.g., classroom lectures), or no teacher at all (e.g., online homework). ITSs are often designed with the goal of providing access to high quality education to each and every student.

International Conference on User Modeling, Adaptation, and Personalization
The International Conference on User Modeling, Adaptation, and Personalization (UMAP) is the oldest international conference for researchers and practitioners working on various kinds of user-adaptive computer systems, such as adaptive hypermedia systems, recommender systems, adaptive websites, adaptive and personalized learning systems, intelligent tutoring systems, and personalized search systems. All of these systems adapt to their individual users, or to groups of users (i.e., personalization).
To achieve this goal, they collect and represent information about users or groups (i.e., user modeling). The UMAP conferences have historically been organized under the auspices of User Modeling Inc., a professional organization of user modeling researchers. Until 2015, the conference proceedings were published by Springer. In 2016, the UMAP conference series became affiliated with the Association for Computing Machinery (ACM), where it is supported by ACM SIGWEB and ACM SIGCHI.

Learning analytics
Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. A related field is educational data mining.

MediaWiki
MediaWiki is a free and open-source wiki engine. It was developed for use on Wikipedia in 2002, and given the name "MediaWiki" in 2003. It remains in use on Wikipedia and almost all other Wikimedia websites, including Wiktionary, Wikimedia Commons and Wikidata; these sites continue to define a large part of the requirement set for MediaWiki. MediaWiki was originally developed by Magnus Manske and improved by Lee Daniel Crocker. Its development has since then been coordinated by the Wikimedia Foundation.
MediaWiki is written in the PHP programming language and stores all text content in a database. The software is optimized to efficiently handle large projects, which can have terabytes of content and hundreds of thousands of hits per second. Because Wikipedia is one of the world's largest websites, achieving scalability through multiple layers of caching and database replication has been a major concern for developers. Another major aspect of MediaWiki is its internationalization; its interface is available in more than 300 languages. The software has more than 900 configuration settings and more than 1,900 extensions available for enabling various features to be added or changed. Besides its use on Wikimedia sites, MediaWiki has been used as a knowledge management and content management system on many thousands of websites, public and private, including the websites Fandom and wikiHow, and major internal installations like Intellipedia and Diplopedia.

MediaWiki extension
MediaWiki extensions allow MediaWiki to be made more advanced and useful for various purposes. These extensions vary greatly in complexity. The Wikimedia Foundation operates a Git server where many extensions are hosted, and a directory of them can be found on the MediaWiki website. Other sites known for the development of, or support for, extensions include MediaWiki.org, which maintains an extension matrix, and Google Code. MediaWiki code review is itself facilitated through a Gerrit instance. Since version 1.16, MediaWiki has also used the jQuery library.

Personal knowledge management
Personal knowledge management (PKM) is a collection of processes that a person uses to gather, classify, store, search, retrieve and share knowledge in their daily activities (Grundspenkis 2007) and the way in which these processes support work activities (Wright 2005). It is a response to the idea that knowledge workers need to be responsible for their own growth and learning (Smedley 2009). It is a bottom-up approach to knowledge management (KM) (Pollard 2008).

Personalization
Personalization (broadly known as customization) consists of tailoring a service or a product to accommodate specific individuals, sometimes tied to groups or segments of individuals. A wide variety of organizations use personalization to improve customer satisfaction, digital sales conversion, marketing results, branding, and website metrics, as well as for advertising. Personalization is a key element in social media and recommender systems.

Peter Brusilovsky
Peter Brusilovsky is a professor of information science and intelligent systems (artificial intelligence) at the University of Pittsburgh. He is known as one of the pioneers of adaptive hypermedia, the adaptive Web, and Web-based adaptive learning.
He has also published numerous articles in user modeling, personalization, educational technology, intelligent tutoring systems, and information access. Brusilovsky is ranked #1 in the world in the area of Computer Education and #21 in the world in the area of World Wide Web by Microsoft Academic Search. According to Google Scholar, he has over 25,000 citations and an h-index of 67. Brusilovsky's group has been awarded best paper awards at the Adaptive Hypermedia, User Modeling, Hypertext, IUI, ICALT, and EC-TEL conference series. Among these awards are five prestigious James Chen Best Student Paper awards.

Brusilovsky studied applied mathematics and computer science at Moscow State University. His doctoral advisor was Lev Nikolayevich Korolyov. He received postdoctoral training at the University of Sussex, the University of Trier, and Carnegie Mellon University under the guidance of Ben du Boulay, Gerhard Weber, and John Anderson. This research was supported by fellowships from the Royal Society, the Alexander von Humboldt Foundation, and the James S. McDonnell Foundation. Since 2000 he has worked as an Assistant Professor, Associate Professor, and Full Professor at the University of Pittsburgh School of Computing and Information (formerly the School of Information Sciences). He also served as Founding Associate Editor-in-Chief (2007-2012) and Editor-in-Chief (2013-2018) of IEEE Transactions on Learning Technologies. Brusilovsky is a recipient of the NSF CAREER Award, the SFI ETS Walton Visitor Award, and the Fulbright-Nokia Distinguished Chair in Information and Communications Technologies. He also holds an honoris causa degree from the Slovak University of Technology in Bratislava. Brusilovsky coined the term "explorable explanation" for media that uses interactive models to communicate scientific ideas.

Profiling (information science)
In information science, profiling refers to the process of construction and application of user profiles generated by computerized data analysis.
This is the use of algorithms or other mathematical techniques that allow the discovery of patterns or correlations in large quantities of data, aggregated in databases. When these patterns or correlations are used to identify or represent people, they can be called profiles. Beyond a discussion of profiling technologies or population profiling, the notion of profiling in this sense is not just about the construction of profiles, but also concerns the application of group profiles to individuals, e.g., in the cases of credit scoring, price discrimination, or identification of security risks (Hildebrandt & Gutwirth 2008) (Elmer 2004).
Profiling is not simply a matter of computerized pattern-recognition; it enables refined price-discrimination, targeted servicing, fraud detection, and extensive social sorting. Real-time machine profiling constitutes the precondition for emerging socio-technical infrastructures envisioned by advocates of ambient intelligence, autonomic computing (Kephart & Chess 2003) and ubiquitous computing (Weiser 1991).
One of the most challenging problems of the information society involves dealing with increasing data-overload. With the digitizing of all sorts of content as well as the improvement and drop in cost of recording technologies, the amount of available information has become enormous and increases exponentially. It has thus become important for companies, governments, and individuals to discriminate information from noise, detecting useful or interesting data. The development of profiling technologies must be seen against this background. These technologies are thought to efficiently collect and analyse data in order to find or test knowledge in the form of statistical patterns between data. This process, called Knowledge Discovery in Databases (KDD) (Fayyad, Piatetsky-Shapiro & Smyth 1996), provides the profiler with sets of correlated data usable as "profiles".
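The KDD step can be illustrated with a deliberately tiny example: a group profile (a correlation discovered in aggregated records) is derived and could then be applied to individuals, as in the credit-scoring case mentioned above. The data set, attribute names, and helper function are invented:

```python
def build_profile(records, attribute, outcome):
    """Fraction of records sharing `attribute` that also show `outcome`.

    attribute and outcome are (key, value) pairs; the returned ratio is
    the kind of statistical pattern a profiler would extract from data.
    """
    group = [r for r in records if r[attribute[0]] == attribute[1]]
    hits = sum(1 for r in group if r[outcome[0]] == outcome[1])
    return hits / len(group)

# Aggregated records (entirely fictional).
records = [
    {"region": "north", "defaulted": True},
    {"region": "north", "defaulted": True},
    {"region": "north", "defaulted": False},
    {"region": "south", "defaulted": False},
]

# Group profile: how often northern customers defaulted in this data set.
risk = build_profile(records, ("region", "north"), ("defaulted", True))
```

Applying `risk` to a new individual solely because they live in the north is precisely the group-to-individual step that raises the concerns discussed above.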
A simulation is an approximate imitation of the operation of a process or system; the act of simulating first requires that a model be developed. This model is a well-defined description of the simulated subject and represents its key characteristics, such as its behaviour, functions and abstract or physical properties. The model represents the system itself, whereas the simulation represents its operation over time.
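The model/simulation distinction can be made concrete with a small sketch: the model is a well-defined description of the subject (here, Newton's law of cooling, chosen only as an illustrative example), and the simulation steps that model through time with simple Euler updates:

```python
def cooling_model(temp, ambient, k):
    """Model: rate of change of temperature (Newton's law of cooling)."""
    return -k * (temp - ambient)

def simulate(temp, ambient, k, dt, steps):
    """Simulation: the model's operation over time, via Euler steps."""
    history = [temp]
    for _ in range(steps):
        temp += cooling_model(temp, ambient, k) * dt
        history.append(temp)
    return history
```

Questions of step size `dt` and accumulated approximation error are instances of the fidelity and validity issues discussed below.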
Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist. Key issues in simulation include the acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and the fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development in simulation technology and practice, particularly in the field of computer simulation.

User model
User model may refer to:
User interface modeling

User modeling
User modeling is the subdivision of human–computer interaction which describes the process of building up and modifying a conceptual understanding of the user. The main goal of user modeling is the customization and adaptation of systems to the user's specific needs. The system needs to "say the 'right' thing at the 'right' time in the 'right' way". To do so it needs an internal representation of the user. Another common purpose is modeling specific kinds of users, including modeling of their skills and declarative knowledge, for use in automatic software tests. User models can thus serve as a cheaper alternative to user testing.

Web engineering
The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place some unique demands on their usability, performance, security, and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been developments towards addressing these considerations.
Web engineering focuses on the methodologies, techniques, and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, information engineering, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web Engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.