User interface

The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.

Generally, the goal of user interface design is to produce a user interface which makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result. This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the human.

User interfaces are composed of one or more layers, including a human–machine interface (HMI) that interfaces machines with physical input hardware such as keyboards, mice, and game pads, and output hardware such as computer monitors, speakers, and printers. A device that implements an HMI is called a human interface device (HID). Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibrial UI (balance), and gustatory UI (taste).

Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual, and augmented. Standard CUIs use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface. When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may also be classified by how many senses they interact with, as either an X-sense virtual reality interface or X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) standard CUI with visual display, sound and smells; when virtual reality interfaces interface with smells and touch, they are said to be 4-sense (4S) virtual reality interfaces; and when augmented reality interfaces interface with smells and touch, they are said to be 4-sense (4S) augmented reality interfaces.

The Reactable, an example of a tangible user interface

Overview

The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical parts of the human–machine interface that we can see and touch.

In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to controlling the physical elements used for human–computer interaction.

The engineering of the human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering.

Tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the expression graphical user interface is used for the human–machine interface on computers, as nearly all of them now use graphics.

Terminology

A human–machine interface usually involves peripheral hardware for input and output. Often, there is an additional component implemented in software, such as a graphical user interface.

There is a difference between a user interface and an operator interface or a human–machine interface (HMI).

  • The term "user interface" is often used in the context of (personal) computer systems and electronic devices
    • Where a network of equipment or computers is interlinked through an MES (manufacturing execution system) or host to display information.
    • A human–machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
    • The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency).
  • The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI).[1] HMI is a modification of the original term MMI (man-machine interface).[2] In practice, the abbreviation MMI is still frequently used[2] although some may claim that MMI stands for something different now. Another abbreviation is HCI, but it is more commonly used for human–computer interaction.[2] Other terms used are operator interface console (OIC) and operator interface terminal (OIT).[3] However it is abbreviated, the terms refer to the 'layer' that separates a human that is operating a machine from the machine itself.[2] Without a clean and usable interface, humans would not be able to interact with information systems.

In science fiction, HMI is sometimes used to refer to what is better described as direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants).[4][5]

In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.[6][7]

History

The history of user interfaces can be divided into the following phases according to the dominant type of user interface:

1945–1968: Batch interface

IBM 029 card punch

In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.

The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.

Submitting a job to a batch machine involved, first, preparing a deck of punched cards describing a program and a dataset. Punching the program cards wasn't done on the computer itself, but on keypunches, specialized typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes meant to be parsed by the smallest possible compilers and interpreters.

Holes are punched in the card according to a prearranged code, transferring the facts from the census questionnaire into statistics

Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or (all too often) an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.

The turnaround time for a single job often spanned entire days. If one were very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.

Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called “load-and-go” systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.

1969–present: Command-line user interface

Teletype Model 33 ASR

Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change his or her mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.[8]

The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the Rule of Least Surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.

DEC VT100 terminal

The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.

Just as importantly, the existence of an accessible screen — a two-dimensional display of text that could be rapidly and reversibly modified — made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition.

1985: SAA User Interface or Text-Based User Interface

In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which included the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS and Windows console applications use that standard as well.

This defined that a pulldown menu system should be at the top of the screen, a status bar at the bottom, and shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard.[9]

1968–present: Graphical User Interface

The AMX Desk, a basic WIMP GUI
The Linotype WYSIWYG 2000, 1989
  • 1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.[10]
  • 1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers)[10]
  • 1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs[10]
  • 1979 – Steve Jobs and other Apple engineers visit Xerox PARC. Though Pirates of Silicon Valley dramatizes the events, Apple had already been working on GUI projects such as the Macintosh and Lisa before the visit.[11][12]
  • 1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, couple of hours to recover from crash), and poor marketing
  • 1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown once, most expensive ever made at that time
  • 1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems
  • 1985 – Windows 1.0 – provided a GUI on top of MS-DOS. No overlapping windows (tiled instead).
  • 1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows
  • 1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
  • 1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements
  • 1987 – Macintosh II: first full-color Mac
  • 1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2

Interface design

Primary methods used in the interface design include prototyping and simulation.

Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping:

  • Common practices for interaction specification include user-centered design, personas, activity-oriented design, scenario-based design, and resiliency design.
  • Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
  • Common practices for prototyping are based on interactive design based on libraries of interface elements (controls, decoration, etc.).

Quality

All great interfaces share eight qualities or characteristics:

  1. Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
  2. Concision:[13] It's easy to make the interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there is just too much stuff on the screen at the same time. If too many things are on the screen, finding what you're looking for is difficult, and so the interface becomes tedious to use. The real challenge in making a great interface is to make it concise and clear at the same time.
  3. Familiarity:[14] Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
  4. Responsiveness:[15] A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
  5. Consistency:[16] Keeping your interface consistent across your application is important because it allows users to recognize usage patterns.
  6. Aesthetics: While you don't need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
  7. Efficiency: Time is money, and a great interface should make the user more productive through shortcuts and good design.
  8. Forgiveness: A good interface should not punish users for their mistakes but should instead provide the means to remedy them.

Principle of least astonishment

The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time,[17] leading to the conclusion that novelty should be minimized.

Habit formation

If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface.[17][18]

Types

HP Series 100 HP-150 touchscreen
  • Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings to present, and the level of detail of the messages presented to the user.
  • Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance to batch processing, and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
  • Command line interfaces (CLIs) prompt the user to provide input by typing a command string with the computer keyboard and respond by outputting text to the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.
  • Conversational interfaces enable users to command the computer with plain text English (e.g., via text messages, or chatbots) or voice commands, instead of graphic elements. These interfaces often emulate human-to-human conversations.[19]
  • Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
  • Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing.
  • Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world.
  • Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
  • Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application-oriented interfaces.[20]
  • Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens.
  • Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction.
  • Intelligent user interfaces are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
  • Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple.[21]
  • Multi-screen interfaces employ multiple displays to provide a more flexible interaction. This is often employed in computer game interaction, in both commercial arcades and, more recently, the handheld market.
  • Natural-language interfaces – Used for search engines and on webpages. User types in a question and waits for a response.
  • Non-command user interfaces, which observe the user to infer his/her needs and intentions, without requiring that he/she formulate explicit commands.[22]
  • Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
  • Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically, this is only possible with very rich graphical user interfaces.
  • Search interface is how the search box of a site is displayed, as well as the visual representation of the search results.
  • Tangible user interfaces, which place a greater emphasis on touch and the physical environment or its elements.
  • Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
  • Text-based user interfaces (TUIs) are user interfaces which interact via text. TUIs include command-line interfaces and text-based WIMP environments.
  • Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing number of mobile devices and many types of point of sale, industrial processes and machines, self-service machines, etc.
  • Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.
  • Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface.
  • Web-based user interfaces or web user interfaces (WUI) that accept input and provide output by generating web pages viewed by the user using a web browser program. Newer implementations utilize PHP, Java, JavaScript, AJAX, Apache Flex, .NET Framework, or similar technologies to provide real-time control in a separate program, eliminating the need to refresh a traditional HTML-based web browser. Administrative web interfaces for web servers, servers and networked computers are often called control panels.
  • Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.[23]
  • Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.

Gallery

Historic HMI in the driver's cabin of a German steam locomotive

Modern HMI in the driver's cabin of a German Intercity-Express high-speed train

The HMI of a toilet (in Japan)

HMI for audio mixing at the Danish Broadcasting Corporation

HMI of a machine for the sugar industry, with pushbuttons

HMI for a computer numerical control (CNC) machine

A slightly newer HMI for a CNC machine

Emergency stop/panic switch

References

  1. ^ Griffin, Ben; Baston, Laurel. "Interfaces" (Presentation): 5. Archived from the original on 14 July 2014. Retrieved 7 June 2014. The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human-machine interface (HMI).
  2. ^ a b c d "User Interface Design and Ergonomics" (PDF). Course Cit 811. NATIONAL OPEN UNIVERSITY OF NIGERIA: SCHOOL OF SCIENCE AND TECHNOLOGY: 19. Archived (PDF) from the original on 14 July 2014. Retrieved 7 June 2014. In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now.
  3. ^ "Introduction Section". Recent advances in business administration. [S.l.]: Wseas. 2010. p. 190. ISBN 978-960-474-161-8. Other terms used are operator interface console (OIC) and operator interface terminal (OIT)
  4. ^ Cipriani, Christian; Segil, Jacob; Birdwell, Jay; Weir, Richard (2014). "Dexterous control of a prosthetic hand using fine-wire intramuscular electrodes in targeted extrinsic muscles". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 22 (4): 1–1. doi:10.1109/TNSRE.2014.2301234. ISSN 1534-4320. PMC 4501393. PMID 24760929. Archived from the original on 2014-07-14. Neural co-activations are present that in turn generate significant EMG levels and hence unintended movements in the case of the present human machine interface (HMI).
  5. ^ Citi, Luca (2009). "Development of a neural interface for the control of a robotic hand" (PDF). Scuola Superiore Sant'Anna, Pisa, Italy: IMT Institute for Advanced Studies Lucca: 5. Retrieved 7 June 2014.
  6. ^ Jordan, Joel. "Gaze Direction Analysis for the Investigation of Presence in Immersive Virtual Environments" (Thesis submitted for the degree of Doctor of Philosophy). University of London: Department of Computer Science: 5. Archived (PDF) from the original on 14 July 2014. Retrieved 7 June 2014. The aim of this thesis is to investigate the idea that the direction of gaze may be used as a device to detect a sense-of-presence in Immersive Virtual Environments (IVE) in some contexts.
  7. ^ Ravi (August 2009). "Introduction of HMI". Archived from the original on 14 July 2014. Retrieved 7 June 2014. In some circumstance computers might observe the user, and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.
  8. ^ "HMI Guide". Archived from the original on 2014-06-20.
  9. ^ Richard, Stéphane. "Text User Interface Development Series Part One - T.U.I. Basics". Archived from the original on 16 November 2014. Retrieved 13 June 2014.
  10. ^ a b c McCown, Frank. "History of the Graphical User Interface (GUI)". Harding University. Archived from the original on 2014-11-08.
  11. ^ "The Xerox PARC Visit". web.stanford.edu. Retrieved 2019-02-08.
  12. ^ "apple-history.com / Graphical User Interface (GUI)". apple-history.com. Retrieved 2019-02-08.
  13. ^ Raymond, Eric Steven (2003). "11". The Art of Unix Programming. Thyrsus Enterprises. Archived from the original on 20 October 2014. Retrieved 13 June 2014.
  14. ^ C. A. D'H Gough; R. Green; M. Billinghurst. "Accounting for User Familiarity in User Interfaces" (PDF). Retrieved 13 June 2014.
  15. ^ Sweet, David (October 2001). "9 - Constructing A Responsive User Interface". KDE 2.0 Development. Sams Publishing. Archived from the original on 23 September 2013. Retrieved 13 June 2014.
  16. ^ John W. Satzinger; Lorne Olfman (March 1998). "User interface consistency across end-user applications: the effects on mental models". Journal of Management Information Systems. Managing virtual workplaces and teleworking with information technology. Armonk, NY. 14 (4): 167–193. doi:10.1080/07421222.1998.11518190.
  17. ^ a b Raskin, Jef (2000). The human interface : new directions for designing interactive systems (1. printing. ed.). Reading, Mass. [u.a.]: Addison Wesley. ISBN 0-201-37937-6.
  18. ^ Udell, John (9 May 2003). "Interfaces are habit-forming". Infoworld. Archived from the original on 4 April 2017. Retrieved 3 April 2017.
  19. ^ Errett, Joshua. "As app fatigue sets in, Toronto engineers move on to chatbots". CBC. CBC/Radio-Canada. Archived from the original on June 22, 2016. Retrieved July 4, 2016.
  20. ^ Gordana Lamb. "Improve Your UI Design Process with Object-Oriented Techniques" Archived 2013-08-14 at the Wayback Machine. Visual Basic Developer magazine. 2001. quote: "Table 1. Differences between the traditional application-oriented and object-oriented approaches to UI design."
  21. ^ appleinsider.com Archived 2009-06-19 at the Wayback Machine
  22. ^ Jakob Nielsen (April 1993). "Noncommand User Interfaces". Communications of the ACM. ACM Press. 36 (4): 83–99. doi:10.1145/255950.153582. Archived from the original on 2006-11-10.
  23. ^ Sharon, Taly, Henry Lieberman, and Ted Selker. "A zero-input interface for leveraging group experience in web browsing Archived 2017-09-08 at the Wayback Machine." Proceedings of the 8th international conference on Intelligent user interfaces. ACM, 2003.

10-foot user interface

In computing, 10-foot user interface ("10-foot UI") is a graphical user interface designed for televisions. Compared to desktop computer and smartphone user interfaces, it uses text and other interface elements which are much larger in order to accommodate a typical television viewing distance of 10 feet (3 meters). Additionally, the limitations of a television's remote control necessitate extra user experience considerations to minimize user effort.

Command-line interface

A command-line interface or command language interpreter (CLI), also known as command-line user interface, console user interface and character user interface (CUI), is a means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines). A program which handles the interface is called a command language interpreter or shell.

The CLI was the primary means of interaction with most computer systems on computer terminals in the mid-1960s, and continued to be used throughout the 1970s and 1980s on OpenVMS, Unix systems and personal computer systems including MS-DOS, CP/M and Apple DOS. The interface is usually implemented with a command line shell, which is a program that accepts commands as text input and converts commands into appropriate operating system functions.
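To make this concrete, the sketch below implements a toy command language interpreter in Python using the standard library's cmd module. The command names greet and quit are invented for illustration and do not correspond to any real shell.

```python
import cmd

class MiniShell(cmd.Cmd):
    """A toy command language interpreter: read a line, dispatch a command."""
    intro = "Type 'help' to list commands."
    prompt = "mini> "

    def do_greet(self, arg):
        """greet [NAME] -- print a greeting (illustrative command)."""
        print(f"Hello, {arg or 'world'}!")

    def do_quit(self, arg):
        """quit -- leave the interpreter."""
        return True  # a truthy return value ends cmdloop()

if __name__ == "__main__":
    MiniShell().cmdloop()
```

A real shell follows the same read-dispatch loop, but maps each command onto operating system services such as process creation and file manipulation.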

Today, many end users rarely, if ever, use command-line interfaces and instead rely upon graphical user interfaces and menu-driven interactions. However, many software developers, system administrators and advanced users still rely heavily on command-line interfaces to perform tasks more efficiently, configure their machine, or access programs and program features that are not available through a graphical interface.

Alternatives to the command line include, but are not limited to, text user interface menus (see IBM AIX SMIT for example), keyboard shortcuts, and various other desktop metaphors centered on the pointer (usually controlled with a mouse). Examples of this include Windows versions 1, 2, 3, 3.1, and 3.11 (an OS shell that runs in DOS), DosShell, and Mouse Systems PowerPanel.

Programs with command-line interfaces are generally easier to automate via scripting.

Command-line interfaces for software other than operating systems include a number of programming languages such as Tcl/Tk, PHP, and others, as well as utilities such as the compression utility WinZip, and some FTP and SSH/Telnet clients.

Cursor (user interface)

In computer user interfaces, a cursor is an indicator used to show the current position for user interaction on a computer monitor or other display device that will respond to input from a text input or pointing device. The mouse cursor is also called a pointer, owing to its resemblance in usage to a pointing stick.

Glade Interface Designer

Glade Interface Designer is a graphical user interface builder for GTK+, with additional components for GNOME. In its third version, Glade is programming language–independent, and does not produce code for events, but rather an XML file that is then used with an appropriate binding (such as GtkAda for use with the Ada programming language). See List of language bindings for GTK+ for the available ones.
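As a hedged illustration of that workflow, the following Python sketch loads a Glade-produced XML file through PyGObject's Gtk.Builder binding. The file name main.glade, the object ID main_window, and the handler name on_quit_clicked are hypothetical.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# Load the interface description produced by Glade (hypothetical file name).
builder = Gtk.Builder()
builder.add_from_file("main.glade")

# Look up a widget by the ID assigned in Glade (hypothetical ID).
window = builder.get_object("main_window")

# Bind the signal handler names declared in the XML to Python callables.
builder.connect_signals({"on_quit_clicked": Gtk.main_quit})

window.show_all()
Gtk.main()
```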

Glade is free and open-source software distributed under the GNU General Public License.

Graphical user interface

The graphical user interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other lower-display-resolution types of interfaces, such as video games (where the head-up display (HUD) is preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

Graphical user interface builder

A graphical user interface builder (or GUI builder), also known as GUI designer, is a software development tool that simplifies the creation of GUIs by allowing the designer to arrange graphical control elements (often called widgets) using a drag-and-drop WYSIWYG editor. Without a GUI builder, a GUI must be built by manually specifying each widget's parameters in source-code, with no visual feedback until the program is run.

User interfaces are commonly programmed using an event-driven architecture, so GUI builders also simplify creating event-driven code. This supporting code connects widgets with the outgoing and incoming events that trigger the functions providing the application logic.
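A minimal sketch of that event-driven pattern, written here with Python's standard Tkinter toolkit (widget and handler names are invented for illustration):

```python
import tkinter as tk

root = tk.Tk()
root.title("Event-driven example")

# The handler carries the application logic; the toolkit calls it
# whenever the widget's event (a button press) fires.
def on_press():
    label.config(text="Button pressed")

label = tk.Label(root, text="Waiting...")
label.pack()

# Connecting the widget to the handler is exactly the kind of
# supporting code a GUI builder would generate.
button = tk.Button(root, text="Press me", command=on_press)
button.pack()

root.mainloop()
```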

Some graphical user interface builders, such as Glade Interface Designer, automatically generate all the source code for a graphical control element. Others, like Interface Builder, generate serialized object instances that are then loaded by the application.

Natural user interface

In computing, a natural user interface, or NUI, or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word natural is used because most computer interfaces use artificial control devices whose operation has to be learned.

An NUI relies on a user being able to quickly transition from novice to expert. While the interface requires learning, that learning is eased through design which gives the user the feeling that they are instantly and continuously successful. Thus, "natural" refers to a goal in the user experience – that the interaction comes naturally, while interacting with the technology, rather than that the interface itself is natural. This is contrasted with the idea of an intuitive interface, referring to one that can be used without previous learning.

Several design strategies have been proposed which have met this goal to varying degrees of success. One strategy is the use of a "reality user interface" ("RUI"), also known as "reality-based interfaces" (RBI) methods. One example of an RUI strategy is to use a wearable computer to render real-world objects "clickable", i.e. so that the wearer can click on any everyday object so as to make it function as a hyperlink, thus merging cyberspace and the real world. Because the term "natural" is evocative of the "natural world", RBI are often confused for NUI, when in fact they are merely one means of achieving it.

One example of a strategy for designing a NUI not based in RBI is the strict limiting of functionality and customization, so that users have very little to learn in the operation of a device. Provided that the default capabilities match the user's goals, the interface is effortless to use. This is an overarching design strategy in Apple's iOS. Because this design is coincident with a direct-touch display, non-designers commonly misattribute the effortlessness of interacting with the device to that multi-touch display, and not to the design of the software where it actually resides.

PARC (company)

PARC (Palo Alto Research Center; formerly Xerox PARC) is a research and development company in Palo Alto, California, with a distinguished reputation for its contributions to information technology and hardware systems. Founded by Jacob E. "Jack" Goldman, Xerox Corporation's chief scientist, in 1970, Xerox PARC has been in large part responsible for such developments as laser printing, Ethernet, the modern personal computer, graphical user interface (GUI) and desktop paradigm, object-oriented programming, ubiquitous computing, electronic paper, amorphous silicon (a-Si) applications, and advancing very-large-scale integration (VLSI) for semiconductors.

Xerox formed Palo Alto Research Center Incorporated as a wholly owned subsidiary in 2002.

Shell (computing)

In computing, a shell is a user interface for access to an operating system's services. In general, operating system shells use either a command-line interface (CLI) or graphical user interface (GUI), depending on a computer's role and particular operation. It is named a shell because it is the outermost layer around the operating system kernel.

CLI shells require the user to be familiar with commands and their calling syntax, and to understand concepts about the shell-specific scripting language (for example, bash scripts). They are also more easily operated via refreshable braille display, and provide certain advantages to screen readers.

Graphical shells place a low burden on beginning computer users, and are characterized as being easy to use. Since they also come with certain disadvantages, most GUI-enabled operating systems also provide CLI shells.

Text-based user interface

Text-based user interface (TUI), also called textual user interface or terminal user interface, is a retronym coined sometime after the invention of graphical user interfaces (GUI). TUIs display computer graphics in text mode. An advanced TUI may, like GUIs, use the entire screen area and accept mouse and other inputs.

Tooltip

The tooltip or infotip or a hint is a common graphical user interface element. It is used in conjunction with a cursor, usually a pointer. The user hovers the pointer over an item, without clicking it, and a tooltip may appear—a small "hover box" with information about the item being hovered over. Tooltips do not usually appear on mobile operating systems, because there is no cursor (though tooltips may be displayed when using a mouse).
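One way such a hover box can be implemented is sketched below in Python's Tkinter: an undecorated top-level window is shown when the pointer enters a widget and destroyed when it leaves. This is an illustrative pattern, not a standard tooltip API.

```python
import tkinter as tk

def add_tooltip(widget, text):
    """Attach a simple hover box to a widget (illustrative sketch)."""
    tip = None

    def show(event):
        nonlocal tip
        tip = tk.Toplevel(widget)
        tip.wm_overrideredirect(True)  # no window decorations
        tip.wm_geometry(f"+{event.x_root + 12}+{event.y_root + 12}")
        tk.Label(tip, text=text, background="lightyellow",
                 relief="solid", borderwidth=1).pack()

    def hide(event):
        nonlocal tip
        if tip is not None:
            tip.destroy()
            tip = None

    widget.bind("<Enter>", show)   # pointer starts hovering
    widget.bind("<Leave>", hide)   # pointer moves away

root = tk.Tk()
button = tk.Button(root, text="Hover over me")
button.pack(padx=40, pady=20)
add_tooltip(button, "This is a tooltip")
root.mainloop()
```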

User interface design

User interface design (UI) or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design).

Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.

Interface design is involved in a wide range of projects from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether that be software design, user research, web design, or industrial design.

Voice user interface

A voice-user interface (VUI) makes spoken human interaction with computers possible, using speech recognition to understand spoken commands and questions, and typically text to speech to play a reply. A voice command device (VCD) is a device controlled with a voice user interface.

Voice user interfaces have been added to automobiles, home automation systems, computer operating systems, home appliances like washing machines and microwave ovens, and television remote controls. They are the primary way of interacting with virtual assistants on smartphones and smart speakers. Older automated attendants (which route phone calls to the correct extension) and interactive voice response systems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons via DTMF tones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons.

Newer VCDs are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriate feedback, accurately imitating a natural conversation.

Web application

In computing, a web application or web app is a client–server computer program in which the client (including the user interface and client-side logic) runs in a web browser. Common web applications include webmail, online retail sales, and online auctions.

Widget (GUI)

A control element (sometimes called a control or widget) in a graphical user interface is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, GTK+, and Cocoa contain a collection of controls and the logic to render these.

Each widget facilitates a specific type of user–computer interaction, and appears as a visible part of the application's GUI as defined by the theme and rendered by the rendering engine. The theme makes all widgets adhere to a unified aesthetic design and creates a sense of overall cohesion. Some widgets support interaction with the user, for example labels, buttons, and check boxes. Others act as containers that group the widgets added to them, for example windows, panels, and tabs.
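The widget/container relationship can be seen in a few lines of Python's Tkinter, where a frame groups the widgets created inside it (all names are illustrative):

```python
import tkinter as tk

root = tk.Tk()

# A frame is a container: widgets created with it as parent are
# grouped and laid out inside it.
form = tk.Frame(root, padx=10, pady=10)
form.pack()

tk.Label(form, text="Name:").grid(row=0, column=0)
tk.Entry(form).grid(row=0, column=1)
tk.Checkbutton(form, text="Subscribe").grid(row=1, column=0, columnspan=2)

root.mainloop()
```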

Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks, and provides users with a common language for interaction, maintaining consistency throughout the whole information system.

Graphical user interface builders facilitate the authoring of GUIs in a WYSIWYG manner employing a user interface markup language. They automatically generate all the source code for a widget from general descriptions provided by the developer, usually through direct manipulation.

Widget toolkit

A widget toolkit, widget library, GUI toolkit, or UX library is a library or a collection of libraries containing a set of graphical control elements (called widgets) used to construct the graphical user interface (GUI) of programs.

Most widget toolkits additionally include their own rendering engine. This engine can be specific to a certain operating system or windowing system, or contain back-ends to interface with multiple ones, as well as with rendering APIs such as OpenGL, OpenVG, or EGL.

The look and feel of the graphical control elements can be hard-coded or decoupled, allowing the graphical control elements to be themed/skinned.

Window (computing)

In computing, a window is a graphical control element. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of and may allow input to one or more processes.

Windows are primarily associated with graphical displays, where they can be manipulated with a pointer by employing some kind of pointing device. Text-only displays can also support windowing, as a way to maintain multiple independent display areas, such as multiple buffers in Emacs. Text windows are usually controlled by keyboard, though some also respond to the mouse.

A graphical user interface (GUI) using windows as one of its main "metaphors" is called a windowing system, whose main components are the display server and the window manager.

XUL

XUL (pronounced "zool"), which stands for XML User Interface Language, is a user interface markup language developed by Mozilla. XUL is implemented as an XML dialect, enabling graphical user interfaces to be written in a similar manner to web pages. Such applications must be created using the Mozilla codebase (or a fork of it); the most prominent example is the Firefox web browser.

In the past, Firefox permitted add-ons to extensively alter its user interface via custom XUL code, but this capability was removed in Firefox 57 and replaced with the less-permissive WebExtensions API. (Three forks of Firefox still support the legacy capability: Pale Moon, Basilisk, Waterfox.)

Zooming user interface

In computing, a zooming user interface or zoomable user interface (ZUI, pronounced zoo-ee) is a graphical environment where users can change the scale of the viewed area in order to see more detail or less, and browse through different documents. A ZUI is a type of graphical user interface (GUI). Information elements appear directly on an infinite virtual desktop (usually created using vector graphics), instead of in windows. Users can pan across the virtual surface in two dimensions and zoom into objects of interest. For example, as you zoom into a text object it may be represented as a small dot, then a thumbnail of a page of text, then a full-sized page and finally a magnified view of the page.

ZUIs use zooming as the main metaphor for browsing through hyperlinked or multivariate information.

Objects present inside a zoomed page can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.

When the level of detail present in the resized object is changed to fit the relevant information into the current size, instead of being a proportional view of the whole object, it is called semantic zooming.

Some consider the ZUI paradigm a flexible and realistic successor to the traditional windowing GUI, being a Post-WIMP interface.
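Semantic zooming thus amounts to selecting a representation from the current scale rather than scaling a single image. A toy Python sketch, with thresholds invented purely for illustration:

```python
def semantic_representation(scale: float) -> str:
    """Pick a representation for a text object at a given zoom scale.

    The thresholds are invented for illustration; a real ZUI would
    swap in actual renderers (dot, thumbnail, full page, ...).
    """
    if scale < 0.02:
        return "dot"
    if scale < 0.2:
        return "page thumbnail"
    if scale < 1.5:
        return "full-sized page"
    return "magnified page with fine detail"

for s in (0.01, 0.1, 1.0, 4.0):
    print(f"scale {s:>4}: {semantic_representation(s)}")
```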


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.