This page contains information about our
research papers and software tools.
Design Verification
J. Campos and H. Al-Asaad,
"A novel mutation-based validation paradigm for high-level hardware
descriptions",
to appear in
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2008.
ABSTRACT:
We present a Mutation-based Validation Paradigm (MVP) technology that can
handle complete high-level microprocessor implementations and is based on
explicit design error modeling, design error simulation, and
model-directed test vector generation. We first present a control-based
coverage measure that is aimed at exposing design errors that incorrectly
set control signal values. We then describe MVP's high-level concurrent
design error simulator that can handle various modeled design errors. We
then present fundamental techniques and data structures for analyzing
high-level circuit implementations and present various optimizations to
speed up the processing of data structures and consequently speed up MVP's
overall test generation process. We next introduce a new automatic test
vector generation technique for high-level hardware descriptions that
generates a test sequence by efficiently solving constraints on multiple
finite state machines. To speed up the test generation, MVP is empowered
by learning abilities via profiling various aspects of the test generation
process. Our experimental results show that MVP's learning abilities and
automated test vector generation effectiveness make MVP significantly
better than random or pseudorandom validation techniques.
J. Campos and H. Al-Asaad, “Circuit profiling
mechanisms for high-level ATPG”, Proc. Microprocessor Test & Verification
Workshop, 2006, pp. 9-14.
ABSTRACT:
Our mutation-based validation paradigm (MVP) is a validation environment for high-level
microprocessor implementations. To be able to efficiently generate test sequences, we need
to enable MVP’s ATPG to learn important details of the circuit under validation as a means
to explore critical new circuit scenarios. In this paper, we present new profiling mechanisms
that can exist either as a pre-processor that gathers circuit information prior to the circuit
validation process, or as run-time entities that allow MVP to learn from its progressive experience.
J. Campos and H. Al-Asaad, “Search-Space Optimizations
for High-Level ATPG”, Proc. Microprocessor Test & Verification
Workshop, 2005, pp. 84-89.
ABSTRACT:
Our mutation-based validation paradigm (MVP) is a validation environment for
high-level microprocessor implementations. To be able to efficiently
identify and analyze the architectural states (prospect states) that can
possibly satisfy a set of constraints during MVP’s test generation, we need
to reduce the search space in the analysis process as early as possible. In
this paper, we present some optimizations in the search space that speed up
the overall test generation process.
J. Campos and H. Al-Asaad, “MVP: A Mutation-Based
Validation Paradigm”, Proc. International High-Level Design
Validation & Test Workshop, 2005, pp. 27-34.
ABSTRACT: A mutation-based
validation paradigm that can handle complete high-level microprocessor
implementations is presented. First, a control-based coverage measure is
presented that is aimed at exposing design errors that incorrectly set
control signal values. A method of automatically generating a complete set
of modeled errors from this coverage metric is presented such that the
instantiated modeled errors harness the rules of cause-and-effect that
define mutation-based error models. Finally, we introduce a new automatic
test pattern generation technique for high-level hardware descriptions that
solves multiple concurrent constraints and is empowered by concurrent
programming.
H. Arteaga and H. Al-Asaad,
“On Increasing the Observability of Modern Microprocessors”, Proc.
International Conference on Computer Design (CDES), 2005, pp. 91-96.
ABSTRACT: Microprocessors are
becoming increasingly complex and difficult to debug. Researchers are
constantly looking for new methods to increase the observability and
controllability of microprocessors. This paper introduces a new method to
improve the observability of modern microprocessors and thus simplify the
task of debugging them. The method revolves around an observation circuit
that provides access to important internal signals without interrupting the
microprocessor execution. The output of the observation circuit is ported to
the output of the microprocessor in order to easily detect various physical
faults and design errors. Experimental results show that physical faults and
design errors are detected faster using our method. Moreover, several errors
are detected by the observation circuit without being detected by the
microprocessor outputs.
J. Campos and H. Al-Asaad, “Mutation-Based Validation of
High-Level Microprocessor Implementations”, Proc.
International High-Level Design Validation and Test Workshop, 2004, pp.
81-86.
ABSTRACT: In this paper we present
a preliminary method of validating a high-level microprocessor
implementation by generating a test sequence for a collection of abstract
design error models that can be used to compare the responses of the
implementation against the specification. We first introduce a general
description of the abstract mutation-based design error models that can be
tailored to span any coverage measure for microprocessor validation. Then we
present the clustering-and-partitioning technique that single-handedly makes
the concurrent design error simulation of a large set of design errors
efficient and allows for the acquisition of statistical data on the
distribution of design errors across the design space. We finally present a
method of effectively using this statistical information to guide the ATPG
efforts.
J. Campos and H. Al-Asaad,
“Concurrent Design Error
Simulation for High-Level Microprocessor Implementations”,
Proc. Autotestcon, 2004, pp. 382-388.
ABSTRACT: A high-level concurrent
design error simulator that can handle various design error/fault models is
presented. The simulator is a vital building block of a new promising method
of high-level testing and design validation that aims at explicit design
error/fault modeling, design error simulation, and model-directed test
pattern generation. We first describe how signals are represented in our
concurrent fault simulation and the method of performing operations on these
signals. We then describe how to handle the challenges in executing
conditional statements when the signals used by the statements are augmented
by an error/fault list. We further describe the method in which the error
models are embedded into the simulator such that the result of a concurrent
simulation matches that of a sequence of HDL simulations with the set of
errors/faults inserted manually one by one. We finally demonstrate the
application of our concurrent design error simulator on a typical Motorola
microprocessor. Our simulator was able to detect all detectable and modeled
design errors/faults for a given test sequence and was able to reveal
valuable information about the behavior of erroneous designs.
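The signal representation described in this abstract can be illustrated with a short sketch. This is our own minimal reconstruction, not the simulator's actual data structures: each signal carries its good value together with a list that records only the errors whose value diverges from it.

```python
# Illustrative sketch of concurrent design error simulation (names and
# representation here are assumptions, not the paper's actual code).
# A signal is a pair (good_value, {error_id: diverging_value}); entries
# equal to the good value are dropped so error lists stay small.

def concurrent_and(a, b):
    """AND of two concurrently simulated signals."""
    good = a[0] & b[0]
    out_errors = {}
    for eid in set(a[1]) | set(b[1]):
        va = a[1].get(eid, a[0])   # error absent on this signal -> good value
        vb = b[1].get(eid, b[0])
        v = va & vb
        if v != good:              # keep only entries that diverge
            out_errors[eid] = v
    return (good, out_errors)

# Error e1 flips input a, error e2 flips input b; both diverge at the output.
a = (1, {"e1": 0})
b = (1, {"e2": 0})
print(concurrent_and(a, b))
```

One pass over such gate evaluations yields, per error, the same value a separate HDL simulation with that error injected would compute, which is the matching property the abstract describes.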
H. Arteaga and H. Al-Asaad, “Approaches for Monitoring
Vectors on Microprocessor Buses”, Proc. International Conference on VLSI,
2004, pp. 393-398.
ABSTRACT: This paper introduces
two new methods for observing and recording the vectors that have been
asserted on a bus. The first is a software approach that uses a novel data
structure similar to binary decision diagrams which allows for a compact
representation of stored values. Even though the new data structure
presented in this paper can potentially grow to contain just as many nodes
as there are possible values, such cases are often rare. The second is a
hardware approach that is based on a simple circuit consisting of a small
memory and two counters and has the ability to perform at the speed of the
microprocessor.
H. Al-Asaad and J. P. Hayes,
"Logic
design verification via simulation and automatic test pattern generation",
Journal of Electronic Testing: Theory and Applications, Vol. 16, No.
6, pp. 575-589, December 2000.
ABSTRACT: We investigate an automated
design validation scheme for gate-level combinational and sequential
circuits that borrows methods from simulation and test generation for
physical faults, and verifies a circuit with respect to a modeled set of
design errors. The error models used in prior research are examined and
reduced to five types: gate substitution errors (GSEs), gate count errors (GCEs),
input count errors (ICEs), wrong input errors (WIEs), and latch count errors
(LCEs). Conditions are derived for a gate to be testable for GSEs, which
lead to small, complete test sets for GSEs; near-minimal test sets are also
derived for GCEs. We analyze undetectability in design errors and relate it
to single stuck-line (SSL) redundancy. We show how to map all the foregoing
error types into SSL faults, and describe an extensive set of experiments to
evaluate the proposed method. These experiments demonstrate that high
coverage of the modeled errors can be achieved with small test sets obtained
with standard test generation and simulation tools for physical faults.
H. Al-Asaad and J. P. Hayes,
"ESIM: A
multimodel design error and fault simulator for logic circuits",
Proc. IEEE VLSI Test Symposium , 2000, pp. 221-228.
ABSTRACT: ESIM is a simulation tool that integrates
logic fault and design error simulation for logic circuits. It targets
several design error and fault models, and uses a novel mix of simulation
algorithms based on parallel-pattern evaluation, multiple error activation,
single fault propagation, and critical path tracing. Several experiments are
discussed to demonstrate the power of
ESIM.
H. Al-Asaad, J. P. Hayes, and T. Mudge,
"Modeling
and detecting control errors in microprocessors",
Proc. International Congress on Dynamics and Control of Systems, 1999, 8 pp.
ABSTRACT: Design validation for microprocessors based
on modeling design errors and generating tests for them is discussed. An
error model for control errors is introduced and validated experimentally
for a small microprocessor. A general validation approach using this model
is outlined. Preliminary experimental results suggest that high coverage of
control as well as data errors can be achieved using our approach.
H. Al-Asaad,
Lifetime
Validation of Digital Systems via Fault Modeling And Test Generation,
Ph.D. Dissertation, University of Michigan, Ann Arbor, September 1998.
ABSTRACT: The steady growth in the complexity of
digital systems demands more efficient algorithms and tools for design
verification and testing. Design verification is becoming increasingly
important due to shorter design cycles and the high cost of system failures.
During normal operation, digital systems are subject to operational faults,
which require regular on-line testing in the field, especially for
high-availability and safety-critical applications. Fabrication fault
testing has a well-developed methodology that can, in principle, be adapted
for efficient design validation and on-line testing. This thesis
investigates a comprehensive "lifetime" validation approach that uses
fabrication fault testing and simulation techniques, and accounts for design
errors, fabrication faults, and operational faults. The validation is
achieved by the following sequence of steps: (1) explicit error and fault
modeling, (2) model-directed test generation, and (3) test application.
We first present a hardware design validation methodology that follows the
foregoing validation approach. We analyze the gate-level design error models
used in prior research and show how to map them into single stuck-line (SSL)
faults. We then describe an extensive set of experiments, which demonstrate
that high coverage of the modeled gate-level errors can be achieved with
small test sets obtained with standard test generation and simulation tools
for fabrication faults. Due to the absence of published error data, we have
systematically collected design errors from a number of microprocessor
design projects, and used them to construct high-level error models suitable
for design validation. Experimental results indicate that very high coverage
of actual design errors can be obtained with test sets that are complete for
a small number of design error models. We further present a new error model
for control errors in microprocessors and a validation approach that uses
it.
We next show how to achieve built-in validation by embedding the test
application mechanism within the circuit under test (CUT). This is realized
by built-in self-test (BIST), a design-for-testability technique that places
the testing functions physically within the CUT. We demonstrate how BIST,
which in the past has been typically used only for fabrication faults, can
be applied to on-line testing. On-line BIST can provide full error coverage,
bounded error latency, low hardware and time redundancy. We present a method
for the design of efficient test sets and test generators for BIST,
especially for high-performance scalable datapath circuits. The resultant
test generator designs meet the following goals: scalability, small test set
size, full fault coverage, and very low hardware overhead. We apply our
method to various datapath circuits including a carry-lookahead adder, an
arithmetic-logic unit, and a multiplier-adder.
D. Van Campenhout, H. Al-Asaad, J. P. Hayes, T. Mudge, and R.
Brown, "High-level
design verification of microprocessors via error modeling", ACM
Transactions on Design Automation of Electronic Systems, Vol. 3, No. 4,
pp. 581-599, October 1998.
ABSTRACT: A design verification methodology for
microprocessor hardware based on modeling design errors and generating
simulation vectors for the modeled errors via physical fault testing
techniques is presented. We have systematically collected design error data
from a number of microprocessor design projects. The error data is used to
derive error models suitable for design verification testing. A class of
basic error models is identified and shown to yield tests that provide good
coverage of common error types. To improve coverage for more complex errors,
a new class of conditional error models is introduced. An experiment to
evaluate the effectiveness of our methodology is presented. Single actual
design errors are injected into a correct design, and it is determined if
the methodology will generate a test that detects the actual errors. The
experiment has been conducted for two microprocessor designs and the results
indicate that very high coverage of actual design errors can be obtained
with test sets that are complete for a small number of synthetic error
models.
H. Al-Asaad, D. Van Campenhout, J. P. Hayes, T. Mudge, and R.
Brown, "High-level
design verification of microprocessors via error modeling", Digest of
Papers: IEEE International High-Level Design Validation and Test Workshop,
1997, pp. 194-201.
ABSTRACT: A project is under way at the University of
Michigan to develop a design verification methodology for microprocessor
hardware based on modeling design errors and generating simulation vectors
for the modeled errors via physical fault testing techniques. We have
developed a method to systematically collect design error data, and gathered
concrete error data from a number of microprocessor design projects. The
error data are being used to derive error models suitable for design
verification testing. Design verification is done by simulating tests
targeted at instances of the modeled errors. We are conducting experiments
in which targeted tests are generated for modeled errors in circuits ranging
from RTL combinational circuits to pipelined microprocessors. The
experiments gauge the quality of the error models and explore test
generation for these models. This paper describes our approach and presents
some initial experimental results.
H. Al-Asaad and J. P. Hayes, "Design
verification via simulation and automatic test pattern generation",
Proc. International Conference on Computer-Aided Design, 1995, pp.
174-180.
ABSTRACT: We present a simulation-based method for
combinational design verification that aims at complete coverage of
specified design errors using conventional ATPG tools. The error models used
in prior research are examined and reduced to four types: gate substitution
errors (GSEs), gate count errors (GCEs), input count errors (ICEs), and
wrong input errors (WIEs). Conditions are derived for a gate to be
completely testable for GSEs; these conditions lead to small test sets for
GSEs. Near-minimal test sets are also derived for GCEs. We analyze
redundancy in design errors and relate this to single stuck-line (SSL)
redundancy. We show how to map all the foregoing error types into SSL
faults, and describe an extensive set of experiments to evaluate the
proposed method. Our experiments demonstrate that high coverage of the
modeled design errors can be achieved with small test sets.
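The gate substitution error (GSE) conditions have a flavor that can be sketched in a few lines. This toy check is our own illustration (the gate library and names are assumed) and it only checks local distinguishability; complete testability additionally requires propagating the difference to a primary output.

```python
# Toy local check for gate substitution errors (GSEs). Assumption: a GSE is
# locally exposed by a test whose input pattern makes the substitute gate's
# output differ from the correct gate's output. Real coverage additionally
# needs the difference propagated to a primary output.

GATES = {
    "and":  lambda a, b: a & b,
    "or":   lambda a, b: a | b,
    "nand": lambda a, b: 1 - (a & b),
    "nor":  lambda a, b: 1 - (a | b),
    "xor":  lambda a, b: a ^ b,
}

def gse_locally_exposed(gate, tests):
    """Return the substitute gates distinguished by the local input patterns."""
    exposed = set()
    for sub in GATES:
        if sub == gate:
            continue
        if any(GATES[gate](a, b) != GATES[sub](a, b) for a, b in tests):
            exposed.add(sub)
    return exposed

# The three patterns 01, 10, 11 distinguish a 2-input AND from all four
# substitutes in this gate library.
print(gse_locally_exposed("and", [(0, 1), (1, 0), (1, 1)]))
```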
BIST and On-Line Testing
H. Al-Asaad,
"Efficient global fault collapsing for combinational library modules", Proc. International Conference on Computer Design
(CDES), 2007, pp. 37-43.
ABSTRACT:
Fault collapsing is the process of reducing the number of faults by using
redundancy and equivalence/dominance relationships among faults. Exact
global fault collapsing can be easily applied locally at the logic gates;
however, it is often ignored for library modules due to its high demand on
resources such as execution time and/or memory. In this paper, we present
an efficient and exact global fault collapsing method for library modules
that uses both binary decision diagrams and fault simulation with random
vectors. Experimental results show that the new method reduces the number
of faults drastically with feasible resources and produces significantly
better results than existing approaches.
H. Al-Asaad, "AGFC: An approximate simulation-based
global fault collapsing tool for combinational circuits",
Proc. International Conference on Circuits, Signals, & Systems, 2006, pp. 248-253.
ABSTRACT:
Exact global fault collapsing can be easily applied locally at the logic
gates; however, it is often ignored for large circuits due to its high
demand on execution time and/or memory. In this paper, we present AGFC, an
approximate global fault collapsing tool for combinational circuits.
Experimental results show that (i) AGFC reduces the number of faults
drastically with feasible resources and (ii) AGFC produces significantly
better results than existing approaches.
H. Al-Asaad and P. Moore,
"Non-concurrent on-line testing
via scan chains", Proc. Autotestcon, 2006, pp. 683-689.
ABSTRACT:
With operational faults becoming the dominant cause of failure modes in
modern VLSI, widespread deployment of on-line test technology has become
crucial. In this paper, we present a non-concurrent on-line testing
technique via scan chains. We discuss the modifications needed in the design
so that it can be tested on-line using our technique. We demonstrate our
technique on a case study of a pipelined 8x8 multiply and accumulate unit.
The case study shows that our technique is characterized by high error
coverage, moderate hardware overhead, and negligible time redundancy.
H. Al-Asaad, “EGFC:
An Exact Global Fault Collapsing Tool for Combinational Circuits”,
Proc. International Conference on Circuits, Signals, and Systems, 2005,
pp. 56-61.
ABSTRACT: Fault collapsing is the
process of reducing the number of faults by using redundancy and
equivalence/dominance relationships among faults. Exact fault collapsing
can be easily applied locally at the logic gates; however, it is often
ignored for most circuits due to its high demand on resources such as execution
time and/or memory. In this paper, we present EGFC, an exact global fault
collapsing tool for combinational circuits. EGFC uses binary decision
diagrams to compute the tests for faults and consequently achieve efficient
global fault collapsing. Experimental results show that EGFC reduces the
number of faults drastically with feasible resources.
H. Al-Asaad, G. Valliappan, and L. Ramirez,
“A Novel Functional Testing and Verification
Technique for Logic Circuits”, Proc. International Conference on
Computer Design (CDES), 2005, pp. 129-135.
ABSTRACT: Functional verification
plays a key role in the design verification cycle and the physical fault
testing process. There are several functional verification methods that
generate tests for modules independent of their implementation; however,
these methods do not scale well for medium to large circuits. In this paper
we introduce a new implementation-independent functional test generation
technique that extracts a good set of functional vectors that are
characterized by a small number of neighbors. Two input vectors of a
function are considered neighbors if they produce the same output value of
the function and the Hamming distance between them is one. Our method can be
easily implemented and it generates tests by selecting input vectors that
have fewer neighbors among all input vectors. Our experimental results
demonstrate that our generated tests are significantly better than random
tests. Moreover, our method can handle multiple-output circuits, and can be
easily scaled to target large designs.
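The neighbor notion defined in the abstract is easy to make concrete. The sketch below is our own reconstruction of the counting step, not the authors' tool, and enumerates the full truth table, so it only makes sense for small input counts.

```python
# Count neighbors for every input vector of a Boolean function: two vectors
# are neighbors if they produce the same output and differ in exactly one
# bit. Vectors with fewer neighbors are preferred as functional tests.
# (Exhaustive enumeration -- an illustration for small n only.)
from itertools import product

def neighbor_counts(f, n):
    counts = {}
    for v in product((0, 1), repeat=n):
        counts[v] = sum(
            1
            for i in range(n)
            if f(tuple(b ^ (j == i) for j, b in enumerate(v))) == f(v)
        )
    return counts

# 3-input AND: 111 is the only vector producing 1, so it has no neighbors
# and would be selected first.
counts = neighbor_counts(lambda v: v[0] & v[1] & v[2], 3)
best = min(counts, key=counts.get)
print(best, counts[best])
```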
H. Al-Asaad and R. Lee,
"Simulation-Based Approximate Global Fault Collapsing", Proc.
International Conference on VLSI, 2002, pp. 72-77.
ABSTRACT: To generate tests for a
digital circuit, the test generation tool is initially provided with the
circuit description in a netlist format and then it creates a list of faults
that need to be targeted for detection. For large circuits, the number of
faults can become very large. It is thus beneficial to minimize the number
of faults whenever possible. Fault collapsing is the process of reducing the
number of faults by using equivalence and dominance relationships among
faults. Exact fault collapsing can be easily applied locally at the logic
gates; however, it is not feasible to apply it globally for large circuits.
In this paper, we present an approximate global fault collapsing technique
that is based on the simulation of random vectors. Experimental results show
that our method reduces the number of faults drastically with feasible
resources.
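The idea can be sketched on a toy netlist (our own example, not the paper's tool): simulate random vectors, fingerprint each stuck-at fault by its output responses, and merge faults with identical fingerprints. The resulting classes are approximate because additional vectors may still split them.

```python
# Simulation-based approximate fault collapsing on a toy circuit
# y = (a AND b) OR c. Faults are stuck-at-0/1 on every net. Faults whose
# output signatures agree on all simulated vectors are grouped together.
import random
from collections import defaultdict

NETS = ["a", "b", "c", "n", "y"]  # n is the internal AND output

def simulate(vec, fault=None):
    """Return output y for input dict vec, optionally with (net, value) stuck."""
    def stuck(net, val):
        return fault[1] if fault and fault[0] == net else val
    a, b, c = (stuck(x, vec[x]) for x in "abc")
    n = stuck("n", a & b)
    return stuck("y", n | c)

faults = [(net, s) for net in NETS for s in (0, 1)]
random.seed(0)
vectors = [{x: random.randint(0, 1) for x in "abc"} for _ in range(64)]

classes = defaultdict(list)
for f in faults:
    sig = tuple(simulate(vec, f) for vec in vectors)
    classes[sig].append(f)

# e.g. c/1, n/1, and y/1 all force y to 1, so they land in the same class
print(len(faults), "faults collapse into", len(classes), "classes")
```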
H. Al-Asaad and M. Shringi,
"On-line built-in self-test for operational faults", Proc.
Autotestcon, 2000, pp. 168-174.
ABSTRACT: On-line testing is fast
becoming a basic feature of digital systems, not only for critical
applications, but also for highly available applications. To achieve the
goals of high error coverage and low error latency, advanced hardware
features for testing and monitoring must be included. One such hardware
feature is built-in self-test (BIST), a technique widely applied in
manufacturing testing. We present a practical on-line periodic BIST method
for the detection of operational faults in digital systems. The method
applies a near-minimal deterministic test sequence periodically to the
circuit under test (CUT) and checks the CUT responses to detect the
existence of operational faults. To reduce the testing time, the test
sequence may be partitioned into small sequences that are applied
separately—this is especially useful for real-time digital systems. Several
analytical and experimental results show that the proposed method is
characterized by full error coverage, bounded error latency, moderate space
and time redundancy.
H. Al-Asaad, B. T. Murray, and J. P. Hayes,
"On-line
BIST for embedded systems", IEEE Design & Test of Computers, Vol.
15, No. 4, pp. 17-24, November 1998.
ABSTRACT: We survey on-line testing techniques in
embedded systems and evaluate them based on the following parameters: error
coverage, error latency, space redundancy, and time redundancy. We then
discuss on-line BIST methods that provide full error coverage, bounded error
latency, low space and time redundancy, and impose no redesign constraints
on the circuit under test. Finally, we discuss a recently designed
commercial microprocessor for critical applications that incorporates most
of the methods considered.
H. Al-Asaad, J. P. Hayes, and B. T. Murray,
"Scalable
test generators for high-speed datapath circuits", Journal of
Electronic Testing: Theory and Applications, Vol. 12, Nos. 1/2,
February/April 1998.
ABSTRACT: This paper explores the design of efficient
test sets and test-pattern generators for on-line BIST. The target
applications are high-performance, scalable datapath circuits for which fast
and complete fault coverage is required. Because of the presence of carry-lookahead,
most existing BIST methods are unsuitable for these applications. High-level
models are used to identify potential test sets for a small version of the
circuit to be tested. Then a regular test set is extracted and a test
generator TG is designed to meet the following goals: scalability, small
test set size, full fault coverage, and very low hardware overhead. TG takes
the form of a twisted ring counter with a small decoder array. We apply our
technique to various datapath
circuits including a carry-lookahead adder, an arithmetic-logic unit, and a
multiplier-adder.
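The twisted ring counter at the heart of TG is a Johnson counter; the sketch below models only its state sequence, and the circuit-specific decoder array that expands each state into a datapath test pattern is omitted.

```python
# State sequence of an n-bit twisted ring (Johnson) counter: shift the
# register one position and feed back the complement of the last bit.
# An n-bit counter walks through 2n distinct states before repeating.

def johnson_states(n):
    state = [0] * n
    for _ in range(2 * n):
        yield tuple(state)
        state = [1 - state[-1]] + state[:-1]

for s in johnson_states(3):
    print(s)
# walks 000 -> 100 -> 110 -> 111 -> 011 -> 001 and then repeats
```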
H. Al-Asaad, J. P. Hayes, and B. T. Murray,
"Design
of scalable hardware test generators for on-line BIST", Digest of
Papers: IEEE International On-Line Testing Workshop, 1996, pp. 164-167.
ABSTRACT: This paper briefly reviews on-line built-in
self-test (BIST) and shows its importance in concurrent checking. Then a
new approach for the design of deterministic BIST hardware test generators
is presented. The approach uses high-level models of circuits to identify
the classes of tests needed for complete coverage of faults. The test
generator is then designed with the following goals: scalability,
near-minimal error latency, and complete coverage of the modeled faults.
Moreover, the test generators produced are simple and have low hardware
overhead. Preliminary case studies of carry-lookahead adders, arithmetic
logic units, and barrel shifters show the usefulness of this technique.
H. Al-Asaad and E. Czeck, "Concurrent error correction in
iterative circuits by recomputing with partitioning and voting", Proc.
IEEE VLSI Test Symposium, 1993, pp. 174-177.
ABSTRACT: This paper presents a novel technique for
the design of iterative circuits with concurrent error correction
capabilities. The new method is called "recomputing with partitioning and
voting" (RWPV). It uses a combination of hardware and time redundancy to
achieve fault tolerance while providing the same error correction
capabilities as found in hardware TMR or time redundancy computation. RWPV
error correction is obtained with small hardware and time overhead, as
compared to over 200% overhead in either hardware or time for TMR or time
redundancy.
Fault Tolerant Computing (Reconfiguration)
H. Al-Asaad, “A novel Markov model for the
reliability prediction of fault tolerant non-homogenous multipipelines”,
Proc. Autotestcon, 2003, pp. 664-669.
ABSTRACT: A novel Markov model for the reliability
prediction of fault-tolerant non-homogenous VLSI and WSI multipipeline
arrays is presented. The PEs of the array are assumed to fail independently
(with a constant failure rate) at different moments and the transition rate
between two different error states is constant. A total system failure is
reached when the number of working pipelines becomes less than a
predetermined number Sm. Thus the reliability of the multipipeline array is
defined as the probability of having S(t) greater than or equal to Sm, where
S(t) is the number of survived pipelines at time t, and Sm is the minimum
number of survived pipelines that is needed for the multipipeline to be
considered in a working condition. In addition to predicting the
reliability, the Markov model can be used in design optimization to
determine the best possible design among multiple alternatives. Several
experiments are conducted that demonstrate the ability of the proposed
Markov model to predict the reliability and to evaluate various design
alternatives.
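A worked special case may make the reliability figure concrete. Under simplifying assumptions of our own (identical pipelines, independent exponential failures with rate lam, no repair), the pure-death Markov chain has a closed-form solution; the paper's model is more general, covering non-homogenous arrays.

```python
# Closed-form reliability for the simplified case: each of n identical
# pipelines fails independently with constant rate lam, and the system
# works while at least s_min pipelines survive. Then S(t) is binomial with
# per-pipeline survival probability exp(-lam * t).
from math import comb, exp

def reliability(n, s_min, lam, t):
    """P(S(t) >= s_min) for n independent pipelines with failure rate lam."""
    p = exp(-lam * t)  # probability one pipeline is still alive at time t
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(s_min, n + 1)
    )

# 8 pipelines, at least 6 required, 0.01 failures per unit time, t = 100:
print(round(reliability(8, 6, 0.01, 100), 4))
```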
H. Al-Asaad and A. Sarvi, “Fault tolerance for
multiprocessor systems via time redundant task scheduling”, Proc.
International Conference on VLSI, 2003, pp. 51-57.
ABSTRACT: Fault tolerance is often considered a
desirable additional feature for multiprocessor systems, but it is
becoming an essential attribute. Fault tolerance can be achieved by the use
of dedicated customized hardware, which may have the disadvantage of high
cost. Another approach to fault tolerance is to exploit existing redundancy
in multiprocessor systems via a task scheduling software strategy based on
time redundancy. Time redundancy reduces the expense of additional hardware
needed to achieve fault tolerance at the expense of additional computation
time, which is more affordable. In this paper we present a general-purpose
time redundant task-scheduling scheme for real-time multiprocessor systems
that is capable of tolerating various hardware and software faults. Our
experimental simulation results show that our technique is highly effective,
feasible, and promising.
H. Al-Asaad, M. Vai, and J. Feldman,
"Distributed
reconfiguration of fault tolerant VLSI multipipeline arrays with constant
interstage path lengths", Proc. International Conference on Computer
Design, 1994, pp. 75-78.
ABSTRACT: A new fault-tolerant multipipeline
architecture and its diagnosis/reconfiguration algorithm are presented.
This multipipeline array design methodology is characterized by constant
interstage path lengths that are independent of the fault distribution. Other
features include low hardware overhead and a high survival rate compared
to existing approaches.
H. Al-Asaad and E. S. Manolakos, "A
two-phase reconfiguration algorithm for VLSI and WSI linear arrays out of
two-dimensional architectures", Proc. IEEE International Workshop on
Defect and Fault Tolerance in VLSI Systems, 1993, pp. 56-63.
ABSTRACT: In order to maintain constant
interconnection wire lengths between logically adjacent cells and avoid
introducing additional tracks of buses and switches when linear arrays are
extracted out of two-dimensional architectures with faulty processing
elements, the "spiral" reconfiguration approach has been introduced. Its
main drawback, relative to the tree and patching approaches, is that it
leads to low harvesting. In this paper we introduce a two-phase
reconfiguration strategy that drastically increases the harvesting ratio.
The algorithm of the first phase achieves comparable harvesting to the
previously proposed schemes, while it is simpler and can be implemented by
on-chip logic. The algorithm of the second phase may complement any other
scheme used during the first phase, and raises the harvesting ratio to
levels that could be achieved by the much more involved tree approach.
H. Al-Asaad,
On the
Design of Fault-Tolerant VLSI and WSI Non-Homogenous Multipipelines,
M.S. Thesis, Northeastern University, Boston, September 1993.
ABSTRACT: Multipipelines are currently used in many
areas such as signal processing and image processing architectures as well
as in general purpose vector computers. These pipelines are formed of
several stages with different functionalities. The main objective of the
multipipeline design is to reduce the effects of faults through a
fault-tolerant design. In this thesis we present a new design for
multipipelines -- a new architecture, diagnosis, and reconfiguration
algorithm. The design is characterized by unity-length interconnects
between the pipeline stages independent of the fault distribution, a low
hardware overhead compared to other designs, and a number of survived
pipelines comparable to other approaches.
H. Al-Asaad and M. Vai, "A real time reconfiguration
algorithm for VLSI and WSI arrays", Proc. IEEE International Workshop on
Defect and Fault Tolerance in VLSI Systems, 1992, pp. 52-59.
ABSTRACT: Reliability is an important issue in the
real-time operations of VLSI array processors. A new algorithm for the
real-time reconfiguration of VLSI and WSI arrays is presented. This
algorithm is characterized by its simplicity and locality. The control of
this reconfiguration scheme is implemented in hardware for real-time
execution. It supports multiple faults including transient/intermittent
faults with a zero degradation time. Simulation results show that a good
spare utilization rate is achieved with a computational complexity that is
independent of the array size.
Low Power Design
S. Aldeen and H. Al-Asaad, "A new method for power estimation and optimization of combinational
circuits", to appear in Proc. International Conference
on Microelectronics, 2007.
ABSTRACT:
One of the challenges of low power methodologies for digital systems
is saving power consumption in these systems without compromising
performance. In this paper we propose a new method for estimating dynamic
power consumption in combinational circuits. The method enables us to
optimize the power consumption of typical combinational circuits.
A. Sayed and H. Al-Asaad, "A new
statistical approach for glitch estimation in combinational circuits", Proc. International Symposium on Circuits and
Systems, 2007, pp. 1641-1644.
ABSTRACT:
Low-power consumption has become a highly important concern for synchronous standard-cell
design, and consequently mandates the use of low-power design methodologies and techniques.
Glitches are not functionally significant in synchronous designs, but they consume a lot of power.
By reducing glitching activity, we can reduce the dominant term in the power consumption of CMOS
digital circuits. In this paper, we present a new method to estimate the glitching activity for
different circuit nodes. The method is robust and produces accurate glitch probability numbers early
in the design cycle. It does not have much overhead and it alleviates existing compute-intensive algorithms/methods.
A. Sayed and H. Al-Asaad, "A new low power high
performance flip-flop", Proc. International Midwest Symposium on
Circuits and Systems, 2006.
ABSTRACT: Low power flip-flops are
crucial for the design of low-power digital systems. In this paper we delve
into the details of flip-flop design and optimization for low power. We
compare the lowest power flip-flops reported in the literature and introduce
a new flip-flop that competes with them.
A. Sayed and H. Al-Asaad, "Survey and evaluation of
low-power flip-flops", Proc. International Conference on Computer Design
(CDES), 2006, pp. 77-83.
ABSTRACT: We survey a set of
flip-flops designed for low power and high performance. We highlight the
basic features of these flip-flops and evaluate them based on timing
characteristics, power consumption, and other metrics.
A. Sayed and H. Al-Asaad, “Survey and Evaluation of
Low-Power Full-Adder Cells”, Proc. International Conference on VLSI,
2004, pp. 332-338.
ABSTRACT: In this paper, we survey
various designs of low-power full-adder cells from conventional CMOS to
novel XOR-based designs. We further describe simulation
experiments that compare the surveyed full-adder cells. The experiments
simulate all combinations of input transitions and consequently determine
the delay and power consumption for the various full-adder cells. Moreover,
the simulation results highlight the weaknesses and the strengths of the
various full-adder cell designs.
Software Tools
H. Al-Asaad, The Error Simulator ESIM.
H. Al-Asaad, Approximate Global Fault Collapsing, AGFC.
H. Al-Asaad, Exact Global Fault Collapsing, EGFC.