In electronics engineering, video processing is a particular case of signal processing, in particular image processing, in which the input and output signals are video files or video streams; it often employs video filters. Video processing techniques are used in television sets, VCRs, DVD players, video codecs, video players, video scalers, and other devices. For example, TV sets from different manufacturers commonly differ only in their industrial design and video processing.
Video processors can come either in chip form or as stand-alone units placed between a source device (such as a DVD player or set-top box) and a display with less capable processing. Several companies are widely recognized in the video processor market.
Chips from these companies appear in devices ranging from upconverting DVD players (for standard definition) to HD DVD/Blu-ray Disc players and set-top boxes, and in displays such as plasmas, DLPs (both front and rear projection), LCDs (both flat panels and projectors), and LCoS/"SXRD" displays. Their chips are also becoming more available in stand-alone devices.
Avidemux is a free and open-source program designed for video editing and video processing. It is written in C++ and uses either GTK+ or Qt for its user interface.

Core Video
Core Video is the video processing model employed by macOS. It links the process of decompressing frames from a video source to the rest of the Quartz technologies for image rendering and composition. Both QuickTime X and QuickTime 7 depend on Core Video.

Deblocking filter
A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges that can form between macroblocks when block coding techniques are used. The filter aims to improve the appearance of decoded pictures. It is part of the specification for both the SMPTE VC-1 codec and the ITU H.264 (ISO MPEG-4 AVC) codec.

Deflicking
In video processing, deflicking is a filtering operation applied to brightness flicker in video to improve visual quality. The flicker effect can be seen when the camera frame rate and the lighting frequency are not matched, or in digitized old film. The filter aims to improve the appearance of movies.
The main idea is to smooth image brightness across consecutive frames of the same scene.
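A minimal sketch of this idea, using a hypothetical helper and assuming grayscale frames stored as NumPy arrays with values in [0, 1]: each frame is scaled so that its mean brightness follows a moving average over neighbouring frames.

```python
import numpy as np

def deflicker(frames, window=5):
    """Reduce brightness flicker by scaling each frame so its mean
    brightness matches a moving average over neighbouring frames.
    `frames` is a sequence of grayscale frames (2-D float arrays)."""
    means = np.array([f.mean() for f in frames])
    out = []
    for i, frame in enumerate(frames):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        target = means[lo:hi].mean()          # smoothed brightness target
        gain = target / max(means[i], 1e-8)   # per-frame correction gain
        out.append(np.clip(frame * gain, 0.0, 1.0))
    return out
```

Real deflickers are more careful: they distinguish flicker from genuine brightness changes (scene cuts, fades) and often correct local regions rather than whole frames.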
The deflicking filter is typically used inside video cameras (to normalize the picture), in postprocessing of captured video, and in the restoration of video from old film.

Deinterlacing
Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced form.
An interlaced video frame consists of two fields captured in sequence, one scanning the odd lines of the image sensor and the next the even lines. Analog television employed this technique because it required less transmission bandwidth while eliminating the perceived flicker that progressive scan at a similar frame rate would produce. CRT-based displays could present interlaced video directly because of their fully analog nature. Newer displays are inherently digital, in that the display comprises discrete pixels, so the two fields need to be combined into a single frame, which leads to various visual defects that the deinterlacing process should try to minimize.
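The simplest deinterlacers either "weave" the two fields back together (good for static content) or "bob" a single field up to full height by line doubling (good for motion). A minimal sketch of a bob deinterlacer, assuming a field stored as a NumPy array holding every other line of the frame:

```python
import numpy as np

def bob_deinterlace(field, top=True):
    """Line-doubling ("bob") deinterlace: rebuild a full frame from a
    single field by interpolating the missing lines from their
    vertical neighbours. `field` has shape (H/2, W)."""
    h, w = field.shape
    frame = np.zeros((h * 2, w), dtype=float)
    offset = 0 if top else 1
    frame[offset::2] = field                  # copy the known lines
    missing = 1 - offset
    for y in range(missing, h * 2, 2):        # fill the absent lines
        above = frame[y - 1] if y > 0 else frame[y + 1]
        below = frame[y + 1] if y + 1 < h * 2 else frame[y - 1]
        frame[y] = (above + below) / 2
    return frame
```

Production deinterlacers go much further, e.g. detecting motion per pixel and weaving static regions while bobbing moving ones.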
Deinterlacing has been researched for decades and employs complex processing algorithms; however, consistent results have been very hard to achieve.

Digital video fingerprinting
Video fingerprinting is a technique in which software identifies, extracts, and then summarizes characteristic components of a video recording, enabling that video to be uniquely identified by its resultant "fingerprint". This technology has proven to be effective at identifying and comparing digital video data.

Filter (video)
A video filter is a software component that performs some operation on a multimedia stream. Multiple filters can be used in a chain, known as a filter graph, in which each filter receives input from its upstream filter, processes the input and outputs the processed video to its downstream filter.
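The chained filter-graph model can be sketched with plain functions. The filters below are hypothetical toy examples operating on a frame represented as a 2-D list of 8-bit pixel values:

```python
def brighten(frame):
    """Example filter: raise every pixel value, clamped to 255."""
    return [[min(p + 40, 255) for p in row] for row in frame]

def invert(frame):
    """Example filter: invert every pixel value."""
    return [[255 - p for p in row] for row in frame]

def run_filter_graph(frame, filters):
    """Pass the frame through each filter in order: every filter
    receives the output of its upstream neighbour."""
    for f in filters:
        frame = f(frame)
    return frame

frame = [[0, 100], [200, 255]]
out = run_filter_graph(frame, [brighten, invert])  # brighten, then invert
```

Real filter graphs (e.g. in DirectShow or FFmpeg) are not limited to linear chains: they are directed graphs that can split and merge streams, and they also negotiate formats between neighbouring filters.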
With regard to video encoding, three categories of filters can be distinguished:
prefilters: used before encoding
intrafilters: used while encoding (and are thus an integral part of a video codec)
postfilters: used after decoding

Graphics processing unit
A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard; in certain CPUs, it is embedded on the CPU die.

The term GPU has been in use since at least the 1980s. It was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU", presenting it as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Rival ATI Technologies coined the term "visual processing unit" (VPU) with the release of the Radeon 9700 in 2002.

Internet Download Manager
Internet Download Manager (also called IDM) is a shareware download manager, available only for the Microsoft Windows operating system.
IDM is a tool to manage and schedule downloads, and it can make use of the full available bandwidth. It has recovery and resume capabilities to restore downloads interrupted by lost connections, network issues, or power outages.
IDM supports proxy servers, firewalls, FTP and HTTP protocols, redirected cookies, and MP3 audio and MPEG video processing. It integrates with Opera, Avant Browser, AOL, MSN Explorer, Netscape, MyIE2, and other popular browsers to manage downloads.

Motion estimation
Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually from adjacent frames in a video sequence. It is an ill-posed problem, as the motion is in three dimensions but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or to specific parts, such as rectangular blocks, arbitrarily shaped patches, or even individual pixels. The motion vectors may be represented by a translational model or by many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.

Multiplexing
In telecommunications and computer networks, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX or DMX).
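A minimal sketch of time-division multiplexing with hypothetical helpers (not a real protocol implementation): samples from several message streams, assumed equal in length, are interleaved into one combined stream, and the demultiplexer recovers them by de-interleaving.

```python
def multiplex(streams):
    """Time-division multiplexing: interleave samples from several
    equal-length message streams into one combined stream."""
    combined = []
    for samples in zip(*streams):   # one sample from each stream per slot
        combined.extend(samples)
    return combined

def demultiplex(combined, n_streams):
    """Reverse process: recover the original streams by taking every
    n-th sample, starting at each stream's slot offset."""
    return [combined[i::n_streams] for i in range(n_streams)]

muxed = multiplex([[1, 2, 3], [10, 20, 30]])   # slots alternate: a, b, a, b, ...
```

Real multiplexing schemes also cover unequal rates, framing/synchronization, and variants such as frequency-division and statistical multiplexing.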
Inverse multiplexing (IMUX) has the opposite aim of multiplexing: it breaks one data stream into several streams, transfers them simultaneously over several communication channels, and recreates the original data stream.

Scientific Working Group – Imaging Technology
The Scientific Working Group on Imaging Technology was convened by the Federal Bureau of Investigation in 1997 to provide guidance to law enforcement agencies and others in the criminal justice system regarding the best practices for photography, videography, and video and image analysis. This group was terminated in 2015.

TV tuner card
A TV tuner card is a kind of television tuner that allows television signals to be received by a computer. Most TV tuners also function as video capture cards, allowing them to record television programs onto a hard disk much like a digital video recorder (DVR).
The interfaces for TV tuner cards are most commonly either the PCI bus expansion card or, for many modern cards, the newer PCI Express (PCIe) bus, but PCMCIA, ExpressCard, and USB devices also exist. In addition, some video cards double as TV tuners, notably the ATI All-In-Wonder series. The card contains a tuner and an analog-to-digital converter (collectively known as the analog front end) along with demodulation and interface logic. Some lower-end cards lack an onboard processor and, like a Winmodem, rely on the system's CPU for demodulation.

Video Acceleration API
Video Acceleration API (VA API) is a royalty-free API, together with its implementation as a free and open-source library (libVA) distributed under the MIT License.
The VA API is to be implemented by device drivers to offer end-user software, such as VLC media player or GStreamer, access to available video acceleration hardware, such as PureVideo (through the libva-vdpau driver, which implements VA API in terms of VDPAU) or Unified Video Decoder.
The API enables and provides access to hardware-accelerated video processing, using hardware such as graphics processing units (GPU) to accelerate video encoding and decoding by offloading processing from the central processing unit (CPU).
The VA API video decode/encode interface is platform- and window-system-independent but is primarily targeted at the Direct Rendering Infrastructure (DRI) in the X Window System on Unix-like operating systems (including Linux, FreeBSD, and Solaris) and at Android; however, it can potentially also be used with direct framebuffer and graphics subsystems for video output. Accelerated processing includes support for video decoding, video encoding, subpicture blending, and rendering.

The VA API specification was originally designed by Intel for its GMA (Graphics Media Accelerator) series of GPU hardware, with the specific purpose of eventually replacing the XvMC standard as the default Unix multi-platform equivalent of the Microsoft Windows DirectX Video Acceleration (DxVA) API. Today, however, the API is no longer limited to Intel-specific hardware or GPUs: other hardware manufacturers can freely use this open standard API for hardware-accelerated video processing with their own hardware without paying a royalty fee.

Video denoising
Video denoising is the process of removing noise from a video signal. Video denoising methods can be divided into:
Spatial video denoising methods, where image noise reduction is applied to each frame individually.
Temporal video denoising methods, where noise between frames is reduced. Motion compensation may be used to avoid ghosting artifacts when blending together pixels from several frames.
Spatial-temporal video denoising methods, where a combination of spatial and temporal denoising is used; this is often referred to as 3D denoising.
Denoising is applied in two domains, chroma and luminance: chroma noise is seen as color fluctuations, while luminance noise is seen as light/dark fluctuations. Generally, luminance noise looks more like film grain, while chroma noise looks more unnatural or digital. Video denoising methods are designed and tuned for specific types of noise.
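A minimal sketch of the temporal approach, deliberately without motion compensation (so moving content would ghost): each frame is averaged with its neighbours, which reduces independent noise by roughly the square root of the window size.

```python
import numpy as np

def temporal_denoise(frames, window=3):
    """Temporal denoising sketch: replace each frame with the average
    of itself and its neighbours within `window`. Assumes a static
    scene; moving objects would produce ghosting artifacts."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    half = window // 2
    for i in range(len(frames)):
        lo = max(0, i - half)                 # clamp at sequence edges
        hi = min(len(frames), i + half + 1)
        out[i] = frames[lo:hi].mean(axis=0)   # per-pixel temporal mean
    return out
```

Practical temporal denoisers first estimate motion and align the neighbouring frames before blending, which is exactly the motion compensation mentioned above.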
Typical video noise types include:
Radio channel artifacts
High frequency interference (dots, short horizontal color lines, etc.)
Brightness and color channel interference (problems with antenna)
Video reduplication – false contouring appearance
Brightness and color channel interference (specific type for VHS)
Chaotic line shift at the end of frame (lines resync signal misalignment)
Wide horizontal noise strips (old VHS or obstruction of magnetic heads)
Film artifacts (see also Film preservation)
Dust, dirt, spray
Curling (emulsion exfoliation)
Blocking – low bitrate artifacts
Ringing – low and medium bitrates artifact especially on animated cartoons
Block (slice) damage caused by losses in a digital transmission channel or by disc damage (such as scratches on a DVD)
Different suppression methods are used to remove these artifacts from video.

Video post-processing
The term post-processing (or postproc for short) is used in the video/film business for quality-improvement image processing (specifically digital image processing) methods used in video playback devices (such as stand-alone DVD-Video players), video player software, and transcoding software. It is also commonly used in real-time 3D rendering (such as in video games) to add additional effects.

Video quality
Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in the video signal, which negatively impacts the user's perception of a system. For many stakeholders such as content providers, service providers, and network operators, the assurance of video quality is an important task.
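One common objective model of such degradation is the peak signal-to-noise ratio (PSNR) between an original and a processed frame; a minimal sketch over flat lists of 8-bit pixel values:

```python
import math

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized frames
    (flat lists of pixel values). Higher means less degradation;
    identical frames give infinite PSNR."""
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

PSNR is easy to compute but correlates only loosely with perceived quality, which is why perceptually motivated metrics and subjective testing are also used.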
Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). The quality of a system can also be determined offline (i.e., in a laboratory setting, for developing new codecs or services) or in-service (to monitor and ensure a certain level of quality).

VirtualDub
VirtualDub is a free and open-source video capture and video processing utility for Microsoft Windows written by Avery Lee. It is designed to process linear video streams, including filtering and recompression, and uses the AVI container format to store captured video. The first version of VirtualDub, written for Windows 95, was uploaded to SourceForge on August 20, 2000.

In 2009, the third-party software print guide Learning VirtualDub referred to VirtualDub as "the leading free Open Source video capture and processing tool". Due to its versatility and usefulness, especially in the field of video processing, PC World has referred to VirtualDub as "something of a 'Photoshop' for video files", PC Perspective recommends it for its low overhead, and nextmedia's PC & Tech Authority particularly praises its Direct stream copy feature, which avoids generational degradation of video quality when performing simple editing and trimming tasks, noting that VirtualDub "offers several valuable features that other packages lack, and helps you get quick results without any fuss or patronising wizards".

VirtualDub is recommended by professional computer and tech magazines, guides, and reviewers such as PC World, PC & Tech Authority, PC Perspective, the technology guide website MakeTechEasier, the freeware and open-source software review site Ghacks, and Speed Demos Archive, as well as by third-party professional video production companies, the video-game developer Valve Corporation, and the creators of Wine. Several hundred third-party plug-ins exist for VirtualDub, including some by professional software companies. Furthermore, Debugmode Wax allows VirtualDub plug-ins to be used in professional video editing software such as Adobe Premiere Pro and Vegas Pro.

Zego
The ZEGO ("Zest to go") is a rackmount server platform built by Sony, targeted at the video post-production and broadcast markets. The platform is based on Sony's PlayStation 3, as it features both the Cell processor and the RSX 'Reality Synthesizer'. It aims to greatly speed up post-production work (in particular at the computationally taxing 4K resolution), 3D rendering, and video processing. In some respects it is similar to IBM's QS20/21/22 blades (such as those used in the Roadrunner supercomputer that took the top spot in the TOP500 in May 2008), although Sony appears to target the DCC (Digital Content Creation) markets rather than scientific computing as IBM does, as can be seen from the inclusion of the RSX graphics processor in the ZEGO platform.
ZEGO runs Fixstars's Yellow Dog Enterprise Linux, which was also Sony's favourite Linux distribution for the PlayStation 3.