Download 6210 Exam Questions free and enjoy your success

If you need to pass the Avaya 6210 exam, we have produced an Avaya Aura Contact Center Implementation practice test question database that will help you pass the 6210 exam! We provide valid, most recent, and up-to-date 6210 PDF questions, backed by a guarantee.

6210 Avaya Aura Contact Center Implementation test

Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results

INTERNATIONAL ATOMIC ENERGY AGENCY, Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results, IAEA Human Health Reports No. 4, IAEA, Vienna (2011)


6210 test - Avaya Aura Contact Center Implementation Updated: 2023

Searching for 6210 test dumps that work in the real exam?
Exam Code: 6210 Avaya Aura Contact Center Implementation test June 2023 by team
Avaya Aura Contact Center Implementation
Avaya Implementation test

Other Avaya exams

3002 Avaya IP Office Platform Configuration and Maintenance
7003 Avaya Communication Server 1000 for Avaya Aura Implementation
7220X Avaya Aura Core Components Support (72200X)
6210 Avaya Aura Contact Center Implementation
3312 Avaya Aura Contact Center Administration Exam
3313 Avaya Aura Contact Center Maintenance and Troubleshooting Exam
3314 Avaya Aura Experience Portal with POM Implementation and Maintenance Exam
7497X Avaya Oceana Solution Support Exam
7392X Avaya Aura Call Center Elite Implementation
7492X Avaya Aura Call Center Elite Support
7495X Avaya Oceana Solution Integration
75940X Avaya Converged Platform Integration
76940X Avaya Converged Platform Support
EADC Einstein Analytics and Discovery Consultant
3171T Avaya Enterprise Team Engagement Solutions (APDS)
31860X Avaya IX Calling Design
46150T APSS Avaya Solutions for Midsized Customers
71201X Avaya Aura Core Components Implement Certified
71301X Avaya Aura Communication Applications Implement Certified
78201X Avaya IP Office Platform Support Certified
156-215.81 Check Point Certified Security Administrator R81
44202T Avaya OneCloud UCaaS Sales Specialized Test

We always guide people to study books, study guides and practice tests to strengthen their knowledge of the subjects, but handling tricky questions in the real 6210 test is somewhat difficult. That can be handled by using 6210 dumps questions that consist of real test questions. We provide updated, valid and latest 6210 braindumps questions that really work in the real test.
Avaya Aura Contact Center Implementation
Question: 58
A customer has purchased Avaya Aura Contact Center (AACC) with the correct
licensing to provide Open Queue session licenses, and to provide agent licenses for the
required multimedia contact types. Where is Open Queue initially enabled in AACC?
A. Contact Center Manager Administration (CCMA) > Configuration, Applications >
LM Service Configuration Setup tab
B. Contact Center License Manager > Configuration > Contact Center Licensing tab
C. CCMS > Multimedia Commissioning > Multimedia Licensing tab
D. Ignition Wizard configuration > Licensing tab
Answer: B
Question: 59
Which three statements about Avaya Aura Contact Center (AACC) Licensing are true?
(Choose three.)
A. Agent licenses are available for both Nodal and Corporate Licensing.
B. A Corporate Enterprise license type is for a network of Avaya Aura Contact Center
nodes.
C. The Nodal Enterprise license type controls the licensing for a single Avaya Aura
Contact Center node.
D. The licensing grace period duration is 15 days.
E. Nodal Enterprise licensing supports a Standby License Manager.
Answer: B, C, E
Question: 60
The installation of the Contact Center Manager Administration (CCMA) component
adds default users to the Windows operating system. Which CCMA user accounts are
created during the Avaya Aura Contact Center (AACC) installation?
A. iceAdmin, IUSR_SWC, Bkup_SWC
B. AAD_User, AAAC_Admin, webadmin
C. AAAC_Admin, IUSR_SWC, webadmin
D. iceAdmin, IUSR_SWC, webadmin
Answer: C
Question: 61
You have created a new application (script) in Orchestration Designer (OD).
Which configuration steps in OD will allow you to place a test call to the new
application?
A. Select Application Routes > CDNs > Configured Routes > Select Application > Save
B. Select Application > Routes > Add Application > Save
C. Select Call Router > Application Routes > CDNs > Configured Routes > Add > Select
Application > Save
D. Select CDNs > Add Application > Save
Answer: B
Question: 62
Which tool is used to verify the Communication Control Toolkit (CCT) configuration
and to ensure that all resources are available and accessible to route contacts for the
Contact Center Manager Server (CCMS)?
A. Multimedia Dashboard
B. Reference Client
C. Server Utility
D. Server Manager
Answer: B
Question: 63
A systems engineer has just completed a database maintenance backup. The engineer
would like to verify the success of the backup. In which default location should the
engineer look to determine the success of the backup?
A. C:\Contact Center\Logs\Common Components\DBMaintenance.log
B. D:\Logs\Common Components\DBMaintenance.log
C. D:\Avaya Aura\Contact Center\Logs\CommonComponents\CC_DBMaintenance.log
D. D:\Avaya\Logs\Common Components\CC_DBMaintenance
Answer: B
Question: 64
The Avaya Aura Media Server High Availability (HA) feature ensures the
uninterrupted availability of media processing and reduces the loss of processing data
when an AAMS fails. Which three statements regarding the AAMS High Availability
(HA) feature are true? (Choose three.)
A. You can perform a manual failover on the Active AAMS.
B. You cannot perform a manual failover on the Active AAMS.
C. High Availability (HA) is available only if the AAMS servers are installed on the Red
Hat Enterprise Linux (RHEL) operating system.
D. One AAMS HA pair supports up to 1000 agents, without SIP Call Recording.
Answer: A, C, D
Question: 65
Avaya Aura Contact Center (AACC) uses the media processing capabilities of the
Avaya Aura Media Server (AAMS) to perform functions such as conference customer
and agent speech paths with media treatments. Which three statements regarding AACC
and the AAMS are true? (Choose three.)
A. AAMS is supported on the Windows Server 2012 R2 operating system when
installed co-resident with AACC.
B. AACC does not require a license for each AAMS instance in the solution.
C. AACC integrates with AAMS using Media Server Markup Language (MSML) based
D. AAMS provides a MSML-based service type named ACC_APP_ID.
Answer: A, C, D
For More exams visit
Kill your test at First Attempt....Guaranteed!

Implementation of first stage captive firing test of standard type H-II launch vehicle Ground Test Vehicle/Ground System Test (GTV-1)

The National Space Development Agency of Japan (NASDA) is currently
implementing H-IIA Launch Vehicle Ground Test/Ground System Test
(GTV-1) at the Tanegashima Space Center.

The aim of these tests is to confirm the interfaces between the standard-type
H-IIA Launch Vehicle and the Ground System for establishing launch
readiness. As part of these system tests, the First Stage Captive Firing
Test will be conducted as follows:

1. Test Site: H-II Launch Complex, Osaki Range,
Tanegashima Space Center, NASDA

2. Date and Time of Test: Ignition scheduled for 16:00 on
June 20 (Tuesday), 2000

3. Purpose of Test: To simulate countdown, and to confirm the
overall functions and performance of the
first stage captive firing system


A RTCA-DO-254 Compliant Development Process for Supporting the Design of High-Quality Hard IP Cores

By Patricia Lira1,2 and Edna Barros1
1 Federal University of Pernambuco, Informatics Center - UFPE
2 Centro de Tecnologias Estratégicas do Nordeste – CETENE
Recife, Brazil


The increasing structural complexity and decreasing feature size of integrated circuits, together with shrinking design time, demand that the hardware designer use a very well planned design flow to obtain high-quality IP-cores.

With the increasing demand for more complex IP-cores, mechanisms for guaranteeing design quality are being standardized and must be included in the design flow. The DO-254 standard is an example of such standardization for airborne systems.

In this context, the availability of a Development Process for hardware is an effective support, since it defines a set of ordered steps for designing high-quality digital systems. The ipPROCESS is a process for designing high-quality Soft IP-cores that has supported the design of IP-cores from specification to FPGA prototyping. However, the quality standard for digital systems also covers prototyping as an ASIC.

This paper proposes an improvement of the current ipPROCESS version, adding all the flows and activities needed to design high-quality hard IP-cores, including synthesis, test and layout generation. For this purpose, three new disciplines have been included in the ipPROCESS: the disciplines for Synthesis, Test and Back-End. Each discipline includes the workflow of all activities, roles and artifacts. To validate this new ipPROCESS release, an LCD controller has been designed from specification to ASIC prototyping. Using the ipPROCESS to guide the synthesis, test and layout design reduced the design time by 50%. Additionally, the latest version of ipPROCESS is 100% compliant with the requirements of the DO-254 standard, which makes it very attractive for supporting the design of IP-cores that must be certified according to DO-254.


The complexity of integrated circuit systems and the enabling integration technologies allow the whole system to be integrated into a single chip, called a SoC (System-on-Chip). On the other hand, due to reduced time-to-market, IC design methodologies use previously designed reusable components, the IP-cores, as an alternative to facilitate the design of a complex and complete system.

Additionally, the increasing structural complexity and decreasing feature size of integrated circuits, together with shrinking development time, demand that the hardware designer use a very well planned design flow. Designers have therefore increasingly adhered to rigid, rapid and consistent design methodologies that are more amenable to design automation.

In this context, the usage of a Development Process seems to be an effective approach to define the design flow as a set of well-defined and ordered steps that guarantee the design of high-quality integrated circuits.

As mentioned, the current version of ipPROCESS supports the design of soft IP-cores with prototyping in FPGA. This work proposes a new version of the ipPROCESS that supports the activities of synthesis, test and layout generation for ASIC prototyping of hard IP-cores.


In this section we briefly discuss some related works in the area of process development for software design and the DO-254 standard for designing critical systems in hardware.

The Rational Unified Process (RUP) [1][2] is a process framework created by Rational Software Corporation, acquired by IBM in 2003. RUP presents an iterative view of good practices for software development and project management, and is being used in thousands of projects worldwide.

RUP is composed of six engineering disciplines (business modeling, requirements, analysis & design, implementation, test and deployment) organized along four project lifecycle phases (inception, elaboration, construction and transition). A software team that wants to develop a project based on RUP can customize the process elements that meet its needs.

One of RUP's advantages is being an iterative and incremental process. The iterative process helps to manage changes in the project requirements, while the incremental method helps the project team show previous system versions to the client, avoiding an implementation that diverges from the real client needs.

Another related approach in the area of software development is the eXtreme Programming (XP) process [3][4]. It was created by Kent Beck, based on his work on the Chrysler Comprehensive Compensation System payroll project [5]. Some development practices proposed by XP are pair programming, test-driven development, continuous integration and development divided into small releases.

RTCA DO-254 [6], Design Assurance Guidance for Airborne Electronic Hardware, is a standard used for design assurance of airborne electronic hardware. The document classifies designs into five design assurance levels, A to E, where failures in Level A designs have the most severe safety impact.


The ipPROCESS [7][8] is a process to develop Soft and Hard IP-cores from specification until FPGA and ASIC prototyping. It is currently in version 3.0, and its main goal is the development of high-quality IP-cores that meet the design constraints. To achieve this, the ipPROCESS provides a disciplined approach to assigning tasks and responsibilities within a development organization, and is based on Software Engineering techniques, supported by processes such as RUP (Rational Unified Process) [1] and XP (eXtreme Programming) [3].

Figure 1 presents the key concepts needed to understand the general structure of this process and how its elements are correlated.

Figure 1. ipPROCESS Key Concepts.

The concept of discipline has been incorporated from RUP, in which a discipline is defined as a collection of activities related to an "area of concern" of the project. It is composed of a set of discipline workflows that describe a sequence of activities with a beginning and an end.

Each activity is a work unit executed by a person playing a role in the project. The activity has a set of ordered steps and a set of input and output artifacts. An activity can be supported by work guidelines and suggested tools. Table 1 shows an example of the activity template.

Table 1. Activity template.

Roles are composed of the set of responsibilities and behaviors that a person takes on to execute some activities in the context of the development organization. A role can be played by a single person or by a group.

Artifacts are work products of the project, whose responsible party is the role associated with the activity that generated them.

Figure 2 shows the four ipPROCESS phases: conception, architecture, RTL design and prototyping. Each phase is essentially a span of time between two major milestones. At each phase-end, an assessment is performed to determine whether the objectives of the phase have been met. A satisfactory assessment allows the project to move to the next phase.

Figure 2. ipProcess phases.

Figure 3 shows the proposed ipPROCESS Lifecycle Diagram. It indicates the disciplines and the effort done in each discipline along the design’s phases and iterations.

The proposed ipPROCESS is composed of 8 disciplines that support the design of an IP-core, starting from requirements capture until prototyping in FPGA or as ASIC.

The disciplines are described in terms of activities (that group related tasks), workflows, artifacts, and roles, occurring in this order:

Figure 3. ipPROCESS Lifecycle Diagram.

  1. Requirements: includes 5 activities, 7 artifacts and 5 templates, for establishing and maintaining agreement with the customers on what the IP-core should do.
  2. Analysis & Design: includes 3 activities, 8 artifacts and 3 templates, for transforming the requirements into a design of the IP-core.
  3. RTL Implementation: includes 6 activities, 4 artifacts, 2 templates and 1 script, for implementing the components, according to the IP-core architecture, in some HDL.
  4. Functional Verification: includes 8 activities, 7 artifacts, 3 templates and 1 script, for evaluating through functional verification and assessing the IP-core quality, finding and documenting defects.
  5. Synthesis: includes 5 activities, 9 artifacts, 3 templates and 2 scripts, for performing the RTL and logical synthesis to obtain a netlist according to a given technology library.
  6. Prototyping: includes 8 activities, 5 artifacts, 1 template and 1 script, for prototyping the IP-core in the FPGA, based on the implementation components.
  7. Test: includes 5 activities, 10 artifacts, 4 templates and scripts, for generating a netlist with test structures and test vectors.
  8. Back-end: includes 7 activities, 8 artifacts, 1 template and 1 script, for generating the layout and the packaging of the IP-core.
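The per-discipline counts above can be tabulated and cross-checked mechanically. The sketch below is a toy data structure of ours, not part of any ipPROCESS tooling; it models each discipline as a record of its activity, artifact and template counts:

```python
# Toy model of the ipPROCESS engineering disciplines: each discipline
# groups a number of activities, artifacts and templates (counts taken
# from the list above; the dictionary layout is ours).
DISCIPLINES = {
    "Requirements":            {"activities": 5, "artifacts": 7,  "templates": 5},
    "Analysis & Design":       {"activities": 3, "artifacts": 8,  "templates": 3},
    "RTL Implementation":      {"activities": 6, "artifacts": 4,  "templates": 2},
    "Functional Verification": {"activities": 8, "artifacts": 7,  "templates": 3},
    "Synthesis":               {"activities": 5, "artifacts": 9,  "templates": 3},
    "Prototyping":             {"activities": 8, "artifacts": 5,  "templates": 1},
    "Test":                    {"activities": 5, "artifacts": 10, "templates": 4},
    "Back-End":                {"activities": 7, "artifacts": 8,  "templates": 1},
}

def totals(disciplines):
    """Sum each counter over all disciplines."""
    return {key: sum(d[key] for d in disciplines.values())
            for key in ("activities", "artifacts", "templates")}
```

Summing the listed counts gives 47 activities, 58 artifacts and 22 templates across the eight engineering disciplines.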

Additionally, there are 3 disciplines that support quality assurance and infrastructure:

  1. Deployment,
  2. Quality Assurance
  3. Configuration & Change Management.

For these disciplines, 28 more activities, 29 artifacts and 19 templates have been defined, supporting the deployment of the IP-core as well as activities for guaranteeing IP-core quality.

This paper focuses on the flow supporting ASIC design, with the development of 3 new process disciplines: Synthesis, Test and Back-End, which are described in detail in the next sections.


The Synthesis discipline groups the activities to perform Behavioral Synthesis and Logical Synthesis. These activities are the responsibility of the Synthesis Designer and the Synthesis Engineer, as shown in Figure 4.

The Synthesis Designer is responsible for developing the Synthesis Plan. This plan describes all the planning for the synthesis activities, such as the technology, tools and environment, the design constraints, and the order in which the components of the system will be synthesized.

To develop this artifact, the Synthesis Designer must consider design decisions taken in previous parts of the design development, like the non-functional requirements described in the Requirements Specification.

Figure 4. The Synthesis Discipline.

Once the Synthesis Plan has been defined, the Synthesis Engineer can perform the synthesis activities. These activities are grouped in the following discipline workflows:

  • Perform Behavioral Synthesis – indicates the activities to convert a behavioral description of the IP-core, elaborated in the Implementation discipline, into an RTL description;
  • Perform Logical Synthesis – indicates the activities to convert the elaborated RTL description into a netlist description;
  • Optimize Synthesis – responsible for optimizing the results of the two previously mentioned workflows (Perform Behavioral Synthesis and Perform Logical Synthesis) in order to fulfill the IP-core constraints defined in the Synthesis Plan;
  • Capture Verification Components – its main goal is to group into the Verification Component artifact all the files and directories that enable verification.
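The Optimize Synthesis workflow can be sketched as a constraint-driven iteration: re-run synthesis with increasing effort until the netlist meets the constraints recorded in the Synthesis Plan. Everything here (the `Constraints` record, the `run_synthesis` stand-in and its numbers) is a hypothetical illustration, not a real tool interface:

```python
from dataclasses import dataclass

@dataclass
class Constraints:
    """Targets that the Synthesis Plan would record (illustrative units)."""
    max_area_um2: float
    max_delay_ns: float

def run_synthesis(effort: int):
    # Stand-in for the real synthesis tool: in this toy model, higher
    # optimization effort yields a smaller, faster netlist.
    return {"area_um2": 1000 / effort, "delay_ns": 10 / effort}

def optimize(constraints: Constraints, max_iterations: int = 10):
    """Increase effort until the result meets the plan's constraints."""
    for effort in range(1, max_iterations + 1):
        result = run_synthesis(effort)
        if (result["area_um2"] <= constraints.max_area_um2
                and result["delay_ns"] <= constraints.max_delay_ns):
            return effort, result
    raise RuntimeError("constraints not met; revise the Synthesis Plan")
```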


The Test discipline groups the activities to include Design for Testability (DFT) in the IP-core. These activities are the responsibility of the Test Designer and the Test Engineer, as shown in Figure 5.

The Test Designer is responsible for developing the Test Plan. This plan describes all the planning for the test activities, such as the kinds of test structures to be inserted in the circuit, the constraints (which can be more flexible than those for Logical Synthesis), and the minimum acceptable test coverage.

Figure 5. Test Discipline.

Once the Test Plan has been defined, the Test Engineer can carry out the test activities. These activities are grouped in the following workflows:

  • Insert Test Structures – indicates the activities to convert a netlist description into a netlist with structures that allow DFT;
  • Define Test Vectors – indicates the activities to define the test vectors used to test a manufactured IP-core, inserting the test vectors into the IP-core netlist, as well as analyzing their efficiency and performance;
  • Optimize Test – responsible for optimizing the results of the "Insert Test Structures" workflow in order to achieve the design constraints defined in the Test Plan;
  • Capture Verification Components – its main goal is to group into the Verification Component artifact all the files and directories that enable verification.
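The minimum acceptable test coverage recorded in the Test Plan lends itself to a simple mechanical check. The sketch below is illustrative only; the function names and the 95% threshold are our assumptions, not part of ipPROCESS:

```python
def fault_coverage(detected_faults, total_faults):
    """Fraction of modelled faults detected by the test vectors."""
    return len(set(detected_faults)) / len(set(total_faults))

def meets_test_plan(detected, total, minimum=0.95):
    """True if coverage reaches the Test Plan's minimum (assumed 95%)."""
    return fault_coverage(detected, total) >= minimum
```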


The Back-End discipline groups the activities to generate the layout of the IP-core. These activities are the responsibility of the Back-End Designer and the Back-End Engineer, as shown in Figure 6.

The Back-End Designer is responsible for developing the Back-End Plan. This plan describes all the planning for the back-end activities, such as the location and kind of the I/O pads, the constraints, and the kind of verification that must be done during the Validate activity.

Figure 6. Back-End Discipline.

Once the Back-End Plan has been defined, the Back-End Engineer can perform the back-end activities. These activities are grouped in the following workflows:

  • Perform Floorplanning – the goal of this workflow is to minimize the chip area and delay problems;
  • Perform Placement – the workflow to place the physical cells, without overlap, with the goal of minimizing connection length;
  • Perform Routing – routing is the stage of physical synthesis responsible for defining the metal routes and levels;
  • Validate Layout – once the circuit layout has been synthesized, it is necessary to perform some verification to validate its correctness;
  • Perform Packaging – packaging is responsible for placing the chips in capsules that allow the I/O electrical contacts of the integrated circuit to be connected to a conventional circuit board;
  • Capture Verification Components – its main goal is to group into the Verification Component artifact all the files and directories that enable verification.


Table 2 shows how the ipPROCESS approach complies with DO-254 certification for critical systems. The table lists all the artifacts required for DO-254 certification and the artifacts that the ipPROCESS delivers.

Table 2. DO-254 x ipPROCESS.

DO-254 artifact -> ipPROCESS artifact

Hardware Plans:
  Plan for HW Aspects of Certification (PHAC) -> ipPROCESS instance
  Hardware Design Plan -> Implementation Plan, Verification Plan, Synthesis Plan, Test Plan, Back-End Plan
  HW Verification Plan -> Verification Plan
  HW Configuration Management Plan -> Configuration Management Plan
  Hardware Process Assurance Plan -> Instance for quality assurance

Hardware Design Data:
  Hardware Requirements -> Requirements Specification
  Hardware Design Representation Data:
    Detailed Design Data -> Design Document
    Top-Level Drawing -> Design Model
    Assembly Drawings -> Deployment Model
    Installation Control Drawings -> IP-core Summary, User Manual

Validation and Verification Data:
  Hardware Traceability Data -> Traceability Matrix

Hardware Configuration Management Records -> Hardware Configuration Items
Hardware Process Assurance Records -> Hardware Configuration Items
HW Accomplishment Summary -> ipPROCESS instance report

The PHAC, Hardware Process Assurance Plan and Hardware Accomplishment Summary documents are directly related to DO-254 certification and are under the responsibility of the Quality Assurance ipPROCESS discipline.

This comparison shows that the ipPROCESS can be used for airborne projects that intend to obtain DO-254 certification, covering 100% of the required artifacts.
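The Table 2 mapping can also be expressed as a dictionary, so the artifact-coverage claim can be checked mechanically. This is a toy bookkeeping sketch of ours (the keys paraphrase the table rows), not a certification argument:

```python
# Table 2 as a mapping from required DO-254 artifacts to the ipPROCESS
# artifacts that satisfy them (names paraphrase the table above).
DO254_TO_IPPROCESS = {
    "Plan for HW Aspects of Certification": ["ipPROCESS instance"],
    "Hardware Design Plan": ["Implementation Plan", "Verification Plan",
                             "Synthesis Plan", "Test Plan", "Back-End Plan"],
    "HW Verification Plan": ["Verification Plan"],
    "HW Configuration Management Plan": ["Configuration Management Plan"],
    "Hardware Process Assurance Plan": ["Instance for quality assurance"],
    "Hardware Requirements": ["Requirements Specification"],
    "Detailed Design Data": ["Design Document"],
    "Top-Level Drawing": ["Design Model"],
    "Assembly Drawings": ["Deployment Model"],
    "Installation Control Drawings": ["IP-core Summary", "User Manual"],
    "Hardware Traceability Data": ["Traceability Matrix"],
    "Hardware Configuration Management Records": ["Hardware Configuration Items"],
    "Hardware Process Assurance Records": ["Hardware Configuration Items"],
    "HW Accomplishment Summary": ["ipPROCESS instance report"],
}

def coverage(mapping):
    """Fraction of required DO-254 artifacts with at least one counterpart."""
    covered = [k for k, v in mapping.items() if v]
    return len(covered) / len(mapping)
```

With every required artifact mapped to at least one ipPROCESS artifact, the check returns 1.0, matching the paper's 100% claim.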


To validate the proposed ipPROCESS release, an LCD Controller has been designed from the requirements until layout synthesis.

The LCD controller IP-core implements all the functionality of an alphanumeric LCD display. It prints 32 characters in two lines of 16 characters each, and provides several functions to manipulate the printed characters. Some of its functions are: clear display, return home, entry mode set, cursor or display shift, write characters, and display on/off control. The block diagram of the LCD Controller can be seen in Figure 7.

Figure 7. LCD Controller Architecture.

The final layout of the LCD Controller can be seen in Figure 8. Table 3 shows the most important design results.

Table 3. IP-core results.

Data Count
Logic gate count ~ 10000
Transistors ~ 105500
Silicon area ~ 14 mm²

Figure 8. LCD Controller layout.

In the first attempt at designing the LCD controller, an older version of the ipPROCESS was used, which did not include the disciplines guiding the synthesis, test and back-end activities. During this attempt, a team of two undergraduate students worked for four months and did not succeed, since the layout did not satisfy the DRC rules.

In the second attempt at designing the layout of the LCD controller, the proposed version of the ipPROCESS was used to guide the synthesis, test and back-end activities. The design was successfully completed by a team of two undergraduate students in two months.

Comparing the two attempts, the design using the proposed ipPROCESS version for synthesis, test and back-end shows a 50% decrease in the design time needed to obtain a final IP-core layout without any violations. Additionally, the IP-core designed using this version of the ipPROCESS is better documented, improving communication among designers.


In this paper, we presented the extension of the ipPROCESS development process to also include the flow for Hard IP-core development. This extension includes 3 new disciplines, Synthesis, Test and Back-End, which have a total of 42 new activities, under the responsibility of 6 new roles and generating 27 new process artifacts.
All commercial and military avionics systems that are prototyped in ASIC or FPGA must be compliant with DO-254. The ipPROCESS is a process that supports the design of IP-cores (ASIC and FPGA) according to the DO-254 standard, with an optimized design time. Depending on the assurance level, a different ipPROCESS instance can be adopted that covers the standard's goals.

The project developed for the LCD controller, with one version using the ipPROCESS and one version without it, shows that, for inexperienced teams, the use of the ipPROCESS can result not only in high-quality IP-cores, but also in decreased project development time.


[1]    IBM. IBM Rational Unified Process. Retrieved 22 January 2009.

[2]    Krebs, Jochen (IBM). "The Value of RUP Certification". 2007.

[3]    eXtreme Programming Team. eXtreme Programming. Retrieved 22 January 2009.

[4]    Ken Auer and Roy Miller. Extreme Programming Applied: Playing to Win. Addison-Wesley, 2002.

[5]    Fowler, Martin. Chrysler Comprehensive Compensation Project. In: <>. Retrieved August 2009.

[6]    RTCA. Design Assurance Guidance for Airborne Electronic Hardware. 2000, 85 p.

[7]    ipPROCESS, "The ipPROCESS Definition", LINCS, 2006. [Online]. Available: <>. Retrieved August 2009.

[8]    Santos, F.; Aziz, A.; Santos, D.; Almeida, M.; Barros, E. ipProcess: A Usage of an IP-core Development Process to Achieve Time-to-Market and Quality Assurance in a Multi Project Environment. In Proc. of the IP/SoC, 2008.

IP Core for an H.264 Decoder SoC

By Wagston Staehler & Altamiro Susin, UFRGS
Porto Alegre, Brazil


This paper presents the development of an IP core for an H.264 decoder. This state-of-the-art video compression standard contributes to reducing the huge demand for bandwidth and storage of multimedia applications. The IP is CoreConnect compliant and implements the modules with high performance constraints. The modules integrated are: intraframe prediction, inverse transform and quantization, and deblocking filter. The integration of this IP with a parser and an entropy decoder implemented as software routines, and with a motion compensation hardware IP, will result in a complete H.264 decoder.

1. Introduction

H.264 is the state-of-the-art video encoding standard. It doubles the bit rate savings provided by MPEG-2, which means a reduction of around 100 times in the video stream size without subjective loss of quality. These results are due to intensive computation of various mathematical techniques. The implementation of such efficient algorithms results in very complex digital systems that are hard to design.
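As a back-of-envelope illustration of the claimed reduction of around 100 times, assume raw 1080p video at 30 frames/s and 24 bits per pixel (these parameters are our assumption, not from the paper):

```python
def raw_bitrate_bps(width, height, bits_per_pixel, fps):
    """Uncompressed video bit rate in bits per second."""
    return width * height * bits_per_pixel * fps

raw = raw_bitrate_bps(1920, 1080, 24, 30)   # uncompressed 1080p30
compressed = raw / 100                       # after a ~100x reduction
```

That works out to roughly 1.49 Gbit/s uncompressed, or about 15 Mbit/s after a 100x reduction, which is in the range of practical high-definition H.264 bit rates.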

Nevertheless, following the trends of complex digital system design, we must employ a SoC methodology in order to achieve such an implementation and to respect time budgets. In this approach, the system is divided into subparts that are designed independently but use standard interfaces. Good documentation, a correct test environment and a well-known standard interface will certainly make the integration of the modules easier.

This paper presents a hardware implementation of an H.264 decoder with a SoC approach. The design is intended to operate over an IBM CoreConnect bus structure. A first part, comprising the modules with high performance constraints, was completely designed, packaged as a PLB IP, and prototyped on a Digilent board containing a Virtex-II Pro FPGA. A second part, still under construction, is being developed as software components making use of an embedded PowerPC microprocessor. This first version decodes only I-frames of high-definition videos, and it still has some performance degradation due to the integration process.

2. SoC Methodology

In order to make the realization of complex designs possible while meeting a decreasing time-to-market, a new approach is necessary, called reuse-based design. In this approach the designer works at a high level of abstraction and builds the system from functional modules. These components can be developed by another team or bought from a component provider.

A SoC methodology is used by the system integrator, and it is mainly composed of the following steps [Keating, 2002]: specification, behavioral model, behavioral refinement and test, hardware/software decomposition (partitioning), hardware architecture model specification, hardware architecture model refinement and test (hardware/software co-design), and specification of the implementing blocks.

In this paper a SoC methodology is used to describe the H.264 decoder design. Nevertheless, after the specification of the implementing blocks, it was necessary to develop the modules ourselves, since they were not available.

2.1 Hardware/Software Co-Design

It is usual to implement a system as cooperating hardware and software modules, i.e. hardware modules working together with a general-purpose processor running software routines. Such an approach allows the elaboration of a complete test and debug platform for the hardware development. Once the platform has been chosen, it is possible to use the embedded processor to run a monitoring application that provides inputs to the hardware architecture and observes its behavior on the fly, as if the hardware were in a real execution situation.

Once the hardware is ready for use and the system software application is written, the monitoring application is no longer needed and the entire system is complete.

3. H.264 Decoder Overview

Digital video storage and transmission demand data compression, since video signals generate a huge amount of data relative to the available storage space and transmission bandwidth. Hence a codec (enCOder/DECoder) pair is needed: the encoder converts the source information into a compressed form before it is transmitted or stored, and the decoder converts the compressed information back into video, ready to be displayed on a raster display.

The H.264 algorithm (ITU-T, 2003) was conceived to exploit redundancies between successive frames and between blocks within a frame, using inter- and intraframe prediction, a DCT-based transform and an entropy coding mechanism to compress video data. The number of operations is huge, so processing high-definition video in real time is only achievable with a hardware implementation.

Figure 1 presents an overview of the H.264 decoder. The most important modules are shown: entropy decoder, inverse transform and quantization, intraframe prediction, motion compensation and deblocking filter (ITU-T, 2003). The H.264 encoder is much more complex than the decoder, because it has to explore all coding possibilities to find the best choice for every macroblock of every frame. The decoder, in turn, only has to reconstruct the video content from the parameters sent by the encoder.

Figure 1 – H.264 Decoder Overview

Intraframe prediction exploits spatial redundancy, i.e. it reduces the amount of data by looking for similarities between regions of a frame. Motion compensation exploits temporal redundancy, i.e. the similarities between regions of sequential frames. To avoid large differences between the original frame and the intra or inter prediction, their difference (also called the “residue”) is sent too, encoded by the transform/quantization engine. The resulting stream is passed through an entropy encoder, which exploits statistical redundancy: it assigns a symbol to each group of data, and the more probable the group, the smaller the symbol (Richardson, 2003). The following subsections present a brief description of each part of the decoder.

Entropy Coding

The motion vectors and transform coefficients are passed to the entropy encoder to obtain a further reduction of the bit rate. In entropy coding, smaller codes are used to represent more frequent symbols; therefore, over a large quantity of symbols, bit rate is saved. There are two types of entropy coding for video processing: Huffman-like coding and arithmetic coding. The H.264 standard offers, respectively, CAVLC (Context Adaptive Variable Length Coding) and CABAC (Context-based Adaptive Binary Arithmetic Coding), selected according to the video information being coded in order to obtain the best compression.
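The shorter-code-for-frequent-symbol principle can be seen in the Exp-Golomb codes that H.264 uses for many of its syntax elements (CAVLC adds context-adaptive tables on top of this idea). A minimal decoder sketch in C, assuming the bitstream has already been unpacked to one bit per array element:

```c
#include <stdint.h>

/* Decode one unsigned Exp-Golomb code ue(v) from a bit array.
 * bits[i] holds one bit (0/1); *pos is advanced past the code.
 * Not the full CAVLC engine, just the basic VLC used for many
 * H.264 syntax elements: shorter codes map to smaller, more
 * frequent values. */
uint32_t ue_decode(const uint8_t *bits, int *pos)
{
    int lz = 0;                      /* count leading zero bits */
    while (bits[*pos] == 0) { lz++; (*pos)++; }
    (*pos)++;                        /* skip the terminating '1' */
    uint32_t suffix = 0;
    for (int i = 0; i < lz; i++)     /* read lz suffix bits */
        suffix = (suffix << 1) | bits[(*pos)++];
    return (1u << lz) - 1 + suffix;  /* codeNum */
}
```

For example, the code “1” decodes to 0, “011” to 2 and “00100” to 3, so frequent values cost as little as a single bit.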


Transform

The transform used by H.264 is a kind of DCT (Discrete Cosine Transform), with the difference that its coefficients are integers instead of floating-point values. The transform moves the video information to a transform domain, where regions concentrating more energy stand separated from regions with less energy.


Quantization

The H.264 standard performs scalar quantization. It is a function of QP (Quantization Parameter), which is used at the encoder to control compression quality and output bit rate.

Transform coefficients are divided by a step size derived from QP and rounded, which drives all the small coefficients to zero, adding some distortion but achieving smaller bit rates. The trade-off between the acceptable distortion and the required bit rate must be set according to the application.
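The divide-and-round step can be sketched as follows. This is an illustrative scalar quantizer, not the exact H.264 rescaling tables; `qstep` stands for the step size derived from QP (in H.264 it roughly doubles for every 6 QP increments):

```c
#include <stdlib.h>

/* Illustrative scalar quantization (encoder side) and rescaling
 * (decoder side). Small coefficients collapse to zero, and the
 * round trip introduces a bounded distortion. */
int quantize(int coeff, int qstep)
{
    int sign = coeff < 0 ? -1 : 1;              /* round to nearest */
    return sign * ((abs(coeff) + qstep / 2) / qstep);
}

int dequantize(int level, int qstep)
{
    return level * qstep;                        /* decoder-side rescaling */
}
```

With `qstep = 8`, a coefficient of 3 is zeroed outright, while 100 becomes level 13 and is reconstructed as 104, i.e. a distortion of 4 in exchange for a much smaller symbol to code.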

Interframe Prediction

In a video sequence, frames tend to be very similar because of the high sample rate. Normally there is just a single object that moves slightly, or a panorama that moves as a whole. In these cases the first frame is coded and, for the following ones, a motion vector is coded, i.e. a vector indicating the displacement between frames. Thus temporal redundancy is reduced.
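A minimal sketch of this block-copy view of motion compensation, assuming a toy frame size and integer-pel vectors (the real H.264 also interpolates half- and quarter-pel positions with multi-tap filters; the function name is illustrative):

```c
#define RW 16  /* toy reference frame width  */
#define RH 16  /* toy reference frame height */

/* Integer-pel motion compensation sketch: reconstruct the 4x4
 * block at (x, y) by copying the block displaced by the motion
 * vector (mvx, mvy) in the reference frame. */
void mc_copy_4x4(const unsigned char ref[RH][RW],
                 unsigned char dst[4][4],
                 int x, int y, int mvx, int mvy)
{
    for (int j = 0; j < 4; j++)
        for (int i = 0; i < 4; i++)
            dst[j][i] = ref[y + mvy + j][x + mvx + i];
}
```

Instead of sixteen pixel values per block, the encoder only has to transmit the two vector components plus the (usually near-zero) residue.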

Deblocking Filter

Interframe and intraframe predictions are not performed on a whole frame but on areas called blocks. This makes it possible to find the best match for each block, which leads to the best compression. However, working on blocks independently creates visible boundaries between neighboring blocks, and these transitions must be smoothed. The deblocking filter smoothes the block edges, bringing their samples closer together and providing a better perception of the final image.

Intraframe Prediction

The encoder can always choose between inter- and intraprediction, depending on the position of the frame in the video sequence and on the performance of each prediction. When intraprediction is chosen, the block is transmitted by means of the prediction mode used plus the residual data (the difference between the original block and the predicted block). Notice that the residue contains many zeros and is more easily compressed than the block itself. On the decoder side, the mode chosen by the encoder is read, the prediction is reconstructed from the already decoded neighboring samples, and the residue is added. After the adder module, the decoded block returns to the intraprediction module, since it can now serve as a reference for future blocks.

4. Prototyping Platform

Prototyping was done on the XUP-V2P development board from Digilent Inc. (Digilent, 2006). This board carries a XC2VP30 Virtex-II Pro FPGA (with two hardwired PowerPC 405 processors), a serial port, a VGA output and many other interface resources (Xilinx, 2006). It was also fitted with a 512 MB DDR SDRAM DIMM module.

Before prototyping, the system was simulated after place-and-route in the ModelSim environment (Mentor, 2006). Only after this preliminary test passed was prototyping performed.

Xilinx EDK was employed to create the entire programming platform needed for the prototyping process. Each designed architecture block was connected to the processor bus and prototyped individually. One of the PowerPC processors was employed as a controller for the prototyped modules, emulating the other blocks of the decoder. The input stimuli were sent to the prototyping system through an RS-232 serial port, using a terminal program running on a host PC. The role of the host PC was just to send stimuli collected from the H.264 reference software and to collect the results for later comparison against the standard. The results produced by the prototype were compared to the reference software output, indicating whether the prototyped architecture ran properly at full speed on the target device.

A functional validation of the prototype was performed first. In this approach, a PowerPC processor system was synthesized together with the block under validation, and the stimuli, including the clock signal, were generated by the processor system and sent through the processor system bus. The outputs were sampled directly from the processor system bus. Since the clock for the hardware architecture is generated by software in this approach, the processor system could be arbitrarily slow. Figure 2 illustrates this prototyping approach.

Figure 2 – Functional Prototype Verification System

This approach worked well for functional prototype validation, but the critical paths of the architecture could not be exercised on the FPGA due to the slow clock applied. Later, another approach had to be employed to validate the system at full speed.

4.1 IBM CoreConnect Bus Structure

The CoreConnect bus structure (IBM, 2006) is presented in Figure 3. It follows a canonical SoC design (Keating, 2002), with a microprocessor and several IPs connected to a bus. An IP may be provided for every operation needed; it just has to obey the bus protocol to communicate with the other modules. Note that there are two types of bus connected by a bridge, a high-speed bus and a low-speed bus, in order to separate high-performance IPs from low-performance ones, improving the overall performance.

Figure 3 – CoreConnect Bus Structure

The microprocessor is an IBM PowerPC, the high-speed bus is called PLB (Processor Local Bus) and the low-speed bus is called OPB (On-chip Peripheral Bus). Current Xilinx FPGAs provide two PowerPC microprocessors on-chip, allowing VHDL IPs to be inserted in the FPGA over a CoreConnect structure, together with software components running on the PowerPC.

5. Decoder Modules Implementation

Intraframe prediction, deblocking filter and Q⁻¹/T⁻¹ modules were designed and integrated into a single IP component. This first simplified version of the H.264 decoder is able to decode only I-frames. The next step consists in incorporating the motion compensation module.

In order to conclude this first version, it is still necessary to finish the entropy decoder (CAVLC, for instance) and the parser as software components, to run them on the PowerPC and to make them communicate with the VHDL IP over the CoreConnect bus structure.

5.1 Intraframe Prediction

The intraprediction module's task is to reduce the spatial redundancy within a frame, coding a block from its neighbors, i.e. matching their similarities and coding only the differences. To meet these requirements, an H.264 codec has several types of intraframe prediction, divided into three groups (Richardson, 2003): luma 4x4, luma 16x16 and chroma 8x8.

The coder aims to find the prediction block that best matches the original one, based on the pixels above and to the left. These various prediction types allow the coder to combine neighbors so as to find the best prediction regardless of the image being coded. They identify redundancies over homogeneous or heterogeneous images and find the angle along which the similarities are predominant for each block (4x4 pixels) or macroblock (16x16 pixels).

Figure 4 presents the 4x4 luma prediction modes. The samples above and to the left (labeled A-M) have been previously encoded and reconstructed, and are available to form a prediction reference. The arrows indicate the direction of prediction in each mode, i.e. which neighbors are used to generate the prediction; mode 2 (DC) uses the mean of the available neighbors. This prediction takes a macroblock, subdivides it into sixteen blocks and tries to obtain the prediction closest to the original macroblock, which produces a smaller residue.

There are another four modes for 16x16 luma prediction. Intraframe prediction of chroma samples is similar to luma 16x16 prediction, but with 8x8 blocks instead of 16x16.
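As an illustration, two of the nine 4x4 luma modes can be sketched in software. This is a simplified C sketch: availability checks, the seven remaining modes and sample clipping are omitted, and the array layout is hypothetical.

```c
/* 'above' holds the four reconstructed samples above the block
 * (A-D) and 'left' the four samples to its left (I-L). */
void intra4x4_vertical(const int above[4], int pred[4][4])
{
    for (int j = 0; j < 4; j++)
        for (int i = 0; i < 4; i++)
            pred[j][i] = above[i];      /* mode 0: copy column down */
}

void intra4x4_dc(const int above[4], const int left[4],
                 int pred[4][4])
{
    int sum = 4;                        /* +4 for rounding */
    for (int i = 0; i < 4; i++)
        sum += above[i] + left[i];
    int dc = sum >> 3;                  /* mean of the 8 neighbors */
    for (int j = 0; j < 4; j++)
        for (int i = 0; i < 4; i++)
            pred[j][i] = dc;            /* mode 2: DC */
}
```

The encoder evaluates all modes, keeps the one whose prediction minimizes the residue and transmits only the mode number plus that residue.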

One of the innovations brought by H.264/AVC is that no macroblock (MB) is coded without an associated prediction, including those in I slices. Thus, the transforms are always applied to a prediction error (Richardson, 2003).

The inputs of the intraprediction module are the samples reconstructed before the filter and the coding type of each MB inside the picture (ITU-T, 2003). The outputs are the predicted samples to be added to the residue from the inverse transform.

Figure 4 – Intraframe Prediction 4x4 Luma Modes

The intraprediction architecture was divided into three parts, as can be seen in Figure 5: the NSB (Neighboring Samples Buffer), the SED (Syntactic Elements Decoder) and the PSP (Predicted Samples Processor). The NSB stores the neighboring samples that will be used to predict subsequent macroblocks. The SED decodes the syntactic elements that control the predictor. The PSP uses the information provided by the other parts and computes the predicted samples. The architecture produces four predicted samples per cycle in a fixed order; the PSP has a latency of 4 cycles to process 4 samples. See (Staehler, 2006) for a complete description of the intraframe prediction architecture.

Figure 5 – Intraprediction Module Overview

5.2 Deblocking Filter

The deblocking filtering process consists of modifying the pixels at the four block edges by an adaptive filtering process. The filtering is performed by one of five standardized filters, selected by means of a Boundary Strength (BS) calculation. The Boundary Strength is obtained from the block type plus some pixel arithmetic that verifies whether the pixel differences along the border form a natural edge or an artifact. Figure 6 graphically introduces some definitions employed in the deblocking filter: a block is 4x4 pixel samples; the border is the interface between two neighboring blocks; a line-of-pixels is four samples (luma or chroma components) from the same block, orthogonal to the border; the current block (Q) is the block being processed, while the previous block (P) is a block already processed (left or top neighbor). The filter operation can modify pixels in both the previous and the current block. The filters employed in the deblocking filter are one-dimensional; the two-dimensional behavior is obtained by applying the filter to both the vertical and the horizontal edges of all 4x4 luma or chroma blocks.

Figure 6 – Filter Structures
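The natural-edge-versus-artifact test at the heart of the filter decision can be sketched as follows. This is a simplified form of the standard's per-edge condition; in H.264 the thresholds alpha and beta are looked up from QP-dependent tables, and the values used below are merely illustrative.

```c
#include <stdlib.h>

/* p1,p0 are the two samples on the P side of the border, q0,q1 on
 * the Q side. A moderate step across the border with flat pixels on
 * each side is treated as a blocking artifact and filtered; a larger
 * step is assumed to be a natural image edge and left untouched. */
int edge_needs_filtering(int p1, int p0, int q0, int q1,
                         int alpha, int beta)
{
    return abs(p0 - q0) < alpha &&
           abs(p1 - p0) < beta  &&
           abs(q1 - q0) < beta;
}
```

For example, a step of 8 between flat neighborhoods is filtered, while a step of 68 is kept as a real edge, which is why the filter smooths block boundaries without blurring image content.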

The deblocking filter inputs are pixel data and block context information from previous blocks of the decoder; its output is filtered pixel data according to the standard. Figure 7 shows the block diagram of the proposed deblocking filter architecture.

Figure 7 – Deblocking Filter Block Diagram (Agostini, 2006b)

The Edge Filter is a 16-stage pipeline containing both the decision logic and the filters. It operates on both vertical and horizontal edges of the blocks. Thanks to a block reordering in the Input Buffer, it is possible to connect the Q output to the P input of the Edge Filter. The Encapsulated Filter contains the Edge Filter and the additional buffers for that feedback loop (Agostini, 2006b).

5.3 Inverse Transform and Quantization

The designed architecture for the Q⁻¹ and T⁻¹ blocks is presented generically in Figure 8. Notice the presence of the inverse quantization block between the operations of the T⁻¹ block.

As discussed before, the main goal of this design was to reach a high-throughput hardware solution able to support HDTV. The architecture was designed as a balanced pipeline, processing one sample per clock cycle. This constant production rate depends neither on the color type of the input data nor on the prediction mode used to generate the inputs. Finally, the input bit width is parameterizable to ease integration.

Figure 8 – T⁻¹ and Q⁻¹ block diagram (Agostini, 2006)

The inverse transform block uses three different two-dimensional transforms, according to the type of input data: the 4x4 inverse discrete cosine transform, the 4x4 inverse Hadamard transform and the 2x2 inverse Hadamard transform (Richardson, 2003). The inverse transforms were designed to perform the two-dimensional calculation without using the separability property. The first design step was therefore to decompose their mathematical definitions (Malvar, 2003) into algorithms that do not use separability (Agostini, 2006). The architectures designed for the inverse transforms use only one operator per pipeline stage to save hardware resources. The 2-D IDCT and the 4x4 2-D inverse Hadamard were designed as 4-stage pipelines with a latency of 64 cycles (Agostini, 2006); the 2x2 2-D inverse Hadamard was designed as a 2-stage pipeline with a latency of 8 cycles.
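For reference, the 4x4 inverse Hadamard stage can be sketched in C. Unlike the hardware described above, this sketch uses the separable row/column form for clarity, and omits the final scaling, which the standard folds into the inverse quantization.

```c
/* One-dimensional 4-point Hadamard butterfly. */
void hadamard4(const int in[4], int out[4])
{
    int a = in[0] + in[2], b = in[0] - in[2];
    int c = in[1] + in[3], d = in[1] - in[3];
    out[0] = a + c; out[1] = b + d;
    out[2] = b - d; out[3] = a - c;
}

/* 2-D 4x4 inverse Hadamard: rows first, then columns. Used on the
 * luma DC coefficients of 16x16 intra macroblocks. */
void ihadamard4x4(const int in[4][4], int out[4][4])
{
    int tmp[4][4], col[4], res[4];
    for (int j = 0; j < 4; j++)
        hadamard4(in[j], tmp[j]);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) col[j] = tmp[j][i];
        hadamard4(col, res);
        for (int j = 0; j < 4; j++) out[j][i] = res[j];
    }
}
```

Since every matrix entry is ±1, the whole transform reduces to additions and subtractions, which is what makes a one-operator-per-stage pipeline feasible.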

In the inverse quantization architecture, all internal constants are precalculated and stored in memory, saving resources. The architecture is composed of a constants generator (tables stored in memory), a multiplier, an adder and a barrel shifter.
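That table-multiplier-shifter datapath can be sketched as a multiply-and-shift in C. The table values below are the H.264 rescale factors for 4x4 coefficient positions of class (0,0); other coefficient positions use different constants, and the function name is illustrative.

```c
/* Rescale factors for QP % 6, coefficient position class (0,0). */
const int level_scale[6] = {10, 11, 13, 14, 16, 18};

/* Inverse quantization of one coefficient: table lookup, multiply,
 * then barrel shift. The step size doubles every 6 QP increments,
 * hence the shift by qp / 6. */
int dequant_coeff(int level, int qp)
{
    return (level * level_scale[qp % 6]) << (qp / 6);
}
```

Because the shift amount and table index depend only on QP, a hardware datapath needs no division at all, just the stored constants, one multiplier and a barrel shifter, as in the architecture above.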

A FIFO and other buffers were used in the inverse transform and quantization architecture to guarantee the desired synchronization. The buffers and the FIFO were built from registers instead of memory.

5.4 Modules Integration

A first step towards a complete H.264 decoder was taken: after designing, simulating and prototyping each of the decoder modules, the intraframe prediction, deblocking filter and Q⁻¹/T⁻¹ were integrated and packaged into an IP core.

Every module was instantiated as a VHDL component and some glue logic was written. The integrated system was completely simulated and prototyped using the same methodology described in Section 4.

The same approach employed for the prototype verification of the individual decoder blocks was used to verify the integrated ones: first a post-place-and-route simulation was performed, then the system was prototyped on the Digilent board. Once all the decoder blocks are completely integrated and validated, the processor system can be left out; in that case, the input stimulus is the H.264 bitstream itself and the output is video. At the time of this writing, the intraframe prediction, T⁻¹/Q⁻¹ and deblocking filter blocks are already integrated into an IP core. To obtain a first version of the H.264 decoder, the parser and entropy decoder modules will be developed as software routines. Finally, it will be necessary to add an IP core to perform motion compensation.

5.5 Synthesis Results

Table I shows synthesis results of the presented IP core targeting a Xilinx VP30 FPGA. The attained operating frequency allows decoding high-definition (HDTV) video, 1920x1080 resolution, at 11 frames per second, and standard-definition television (SDTV, 640x480 pixels) at 74 frames per second.

Table I – Synthesis Results

Number of slices: 9,800 of 13,696 (71%)
Number of flip-flops: 10,057 of 27,392 (36%)
Number of 4-input LUTs: 11,807 of 27,392 (43%)
Clock: 34.515 MHz
Throughput: 1 sample/cycle
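As a sanity check, the quoted frame rates follow directly from the clock and the one-sample-per-cycle throughput, assuming 4:2:0 video with luma and chroma samples counted together:

```c
/* Maximum frame rate at one sample per clock cycle. With 4:2:0
 * chroma subsampling, a frame carries width * height * 3/2 samples
 * (luma plus both chroma planes). */
int max_fps(long clock_hz, int width, int height)
{
    long samples_per_frame = (long)width * height * 3 / 2;
    return (int)(clock_hz / samples_per_frame);
}
```

At 34.515 MHz this gives 11 fps for 1920x1080 and 74 fps for 640x480, matching the figures above.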

6. Final Considerations

A SoC methodology is the only way to handle designs of such complexity within a short development time. Treating a digital system as a SoC composed of IPs, with each IP being CoreConnect compliant, accelerates the integration and verification processes, even for a hardware/software co-design. By applying this methodology recursively it is possible to develop an entire H.264 decoder to be embedded in a set-top box, a receiver for digital TV broadcast.

A SoC approach demands a high effort in the specification phase, from system comprehension and modeling to the partitioning of the system into modules. Using pre-designed IP components that fulfill the application's functionality, whether by developing the modules or obtaining them from a third party, is only possible when standard interfaces are defined. This decision makes the integration and verification processes much easier and faster.

This paper presented the implementation of a CoreConnect PLB IP component for an H.264 decoder, along with an overview of the project and of its integration with the parser, the entropy decoder and the motion compensation module. The architecture is described in VHDL, was validated on an FPGA and is intended to be synthesized as an ASIC.

References
  • Agostini, L., et al., High Throughput Multitransform and Multiparallelism IP Directed to the H.264/AVC Video Compression Standard, IEEE International Symposium on Circuits and Systems, 2006, pp. 5419-5422.

  • Agostini, L., Azevedo, A. P., Rosa, V., Berriel, E., Santos, T. G., FPGA Design of a H.264/AVC Main Profile Decoder for HDTV, Proceedings of the 16th International Conference on Field Programmable Logic and Applications, Madrid, Spain, 2006.

  • Digilent Inc. Available at <>. Last access: November 2006.

  • Figueiro, T. H.264 Implementation Test Using the Reference Software. Proceedings of SIBGRAPI, Natal, Brazil. 2005.

  • IBM CoreConnect Bus Structure. Available at <>. Last access: September 2006.

  • ITU – International Telecommunication Union, ITU-T Recommendation H.264 (05/03): Advanced Video Coding For Generic Audiovisual Services, 2003.

  • Keating, M., Bricaud, P., Reuse Methodology Manual, Third Edition, Kluwer Academic Publishers, 2002.

  • Malvar, H., et al., Low-Complexity Transform and Quantization in H.264/AVC, IEEE Transactions on Circuits and Systems for Video Technology, v. 13, n. 7, pp. 598-603, 2003.

  • Mentor Graphics. ModelSim. Available at: <>. Last access: September 2006.

  • Richardson, I., H.264 and MPEG–4 Video Compression – Video Coding for Next–Generation Multimedia, John Wiley and Sons, 2003.

  • Staehler, W.T., Berriel, E.A., Susin, A.A., Bampi, S. Architecture of an HDTV Intraframe Predictor for an H.264 Decoder, Proceedings of IFIP VLSI-SoC, Nice, France, 2006.

  • Xilinx Inc., Xilinx University Program Virtex-II Pro Development System - Hardware Reference Manual, 2006.

