Design Automation Department (Кафедра автоматизації проектування обчислювальної техніки, АПОТ)
Publication: Проектирование цифровых систем с использованием языка VHDL (Харьковский национальный университет радиоэлектроники, 2003). Семенець, В. В.; Хаханова, И. В.; Хаханов, В. И.

Publication: Reproducing kernel Hilbert space methods for CAD tools (EWDTW, 2004). Chumachenko, S. V.; Khawar, Parvez; Gowher, Malik
A review of known RKHS methods and of the current state of research is presented. The place of the Series Summation Method within Reproducing Kernel Hilbert Space (RKHS) theory is determined, and the new results obtained by this method are discussed. RKHS methods are of both purely theoretical and applied interest. RKHS theory has been a well-studied topic, stemming from the original works of [1] through more recent studies of their applications in [2, 3, 8-11]. Mathematical models based on RKHS and causal operators are presented in [3]. They are used in pattern recognition [4], digital data processing [5], image compression [6], and computer graphics [7]; these directions are described by a common mathematical tool, the theory of wavelets [4]. RKHS methods are a basic tool in exact incremental learning [8] and in statistical learning theory [2, 9]. The general theory of reproducing kernels combined with linear mappings in the framework of Hilbert spaces is considered in [2]. A framework for discussing the generalization ability of a trained network in the original function space, using tools of functional analysis based on RKHS, is introduced in [8]. A special kind of kernel-based approximation scheme is also closely linked to regularization theory [10] and to approximation schemes based on Support Vector Machines [11] (Fig.).

Publication: Verification test generation features for microprocessor-based structures (EWDTW, 2004). Krivoulya, G. F.; Shkil, A. S.; Syrevitch, Ye.; Antipenko, O.
A model of a microprocessor-based device as a bichromatic multidigraph with vertices of two types is offered.
Test generation features for functional testing using an updated path-activation algorithm on a structural model are described. The range method of representing data of different formats is introduced. Algorithms for performing direct implication and backtracing for different types of operations, together with their software implementation, are presented. The whole set of deterministic test generation methods for digital devices can be divided into two large groups: structural and functional. Originally, structural methods were oriented to gate-level models of digital devices. However, the growth of design complexity and the rise of component integration have led to models of highly integrated elements being applied as the primitive elements (PE) of devices [1, 2]. The advantages of this approach are the simple construction of a device model and the formalization of test generation procedures; its drawbacks are the large dimension of the device model and the difficulty of creating and maintaining the library of PE models, which can contain hundreds of components. To overcome these drawbacks, the functional approach to test construction was developed and has gained wide circulation [3, 4]. It can be used for digital devices of any complexity, including microsystems with program and microprogram control, as it allows building high-level models of such devices. However, functional methods are poorly formalized, because microsystems contain different types of functional blocks, such as the control block, operational block, address block, etc. It does not appear possible to formalize a method that could handle such heterogeneous types of devices within a uniform approach. In this work a test generation method which further develops the functional approach is offered.
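The abstract does not spell out the range method or the implication procedures; as a hedged illustration only, interval ("range") propagation through a single addition operation might look like the following sketch (the function names and the `(lo, hi)` tuple encoding are assumptions, not the authors' notation):

```python
# Illustrative range propagation for x + y (hypothetical helper names).

def implicate_add(a, b):
    """Direct implication: output range of x + y for x in a, y in b."""
    (alo, ahi), (blo, bhi) = a, b
    return (alo + blo, ahi + bhi)

def backtrace_add(out, a, b):
    """Backtracing: narrow the input ranges of x + y given a required output range."""
    (olo, ohi), (alo, ahi), (blo, bhi) = out, a, b
    # x must satisfy olo - bhi <= x <= ohi - blo, and stay inside its own range.
    na = (max(alo, olo - bhi), min(ahi, ohi - blo))
    nb = (max(blo, olo - ahi), min(bhi, ohi - alo))
    return na, nb

# Example: both inputs in [0, 15]; require the sum to be exactly 25.
out = implicate_add((0, 15), (0, 15))               # whole output range (0, 30)
na, nb = backtrace_add((25, 25), (0, 15), (0, 15))  # both inputs narrow to (10, 15)
```

Backtracing here intersects the algebraically implied interval with the existing one, which is the usual way interval constraints are tightened during justification.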
At the design stage, the digital device is decomposed into so-called homogeneously tested segments. The authors consider a test generation method for one type of segment, namely the operational device (OD).

Publication: Co-design technology of SoC based on Active-HDL 6.2 (EWDTW, 2004). Hyduke, S.; Yegorov, A. A.; Guz, O. A.; Hahanova, I. V.
A technology for designing and verifying digital systems-on-a-chip (SoC) is presented, based on the experience of designing the hardware and software components of an SoC in a single environment. It reflects today's variety of available silicon, hardware and software description languages, and design tools; recommendations and examples are also presented. On today's EDA market there are three major target silicon technologies that define the computing world: programmable devices, gate arrays, and ASICs. These technologies and the relations between them, covering silicon manufacturing technology, hardware and software description languages, design tools, and SoC methodology, are presented in Fig. 1 (cause-effect relations on the EDA market: ASIC, GA, PLD and their combinations with embedded CPUs at 90 nm technology, with design tools based on HDLs). The practical meaning of the figure is that, under the influence of SoC on ASIC and FPGA (PLD) design, integration between them has begun. Powerful embedded processors such as ARM and PowerPC have started to appear in FPGAs; for example, the latest Xilinx Virtex-II Pro FPGA contains 4 embedded IBM PowerPC processors plus 10 million programmable gates available to the user. The design flows of FPGAs and ASICs also began to merge after Altera announced its Structured ASIC flow, in which an FPGA-verified design is transferred to an ASIC without any participation of the developer.
This will influence the world chip market, which is worth about $40 billion per year: 1) powerful processors used in servers and workstations; 2) the personal computer segment, where Intel processors hold the leading place with $20 billion; 3) microcontrollers and signal processors, which generate $14 billion of vendor revenue every year. The third segment is the fastest-growing of the three. Hardware development has reached a stage where the number of transistors grows by 60% per year, but their usage in projects grows by only 20% per year. That is why we see today a rapid growth in the number of SoCs, onto whose available chip area all the buses and peripherals of the developed system are transferred from the board. This allows not only increasing the productivity of the whole digital system and giving it custom functionality, but also significantly reducing energy consumption and decreasing the physical size of the final product. At the same time, one of the main requirements in designing complex systems today is the modular approach, in which a designer can reuse modules from previous projects or use IP (Intellectual Property) cores. For SoCs there is a wide range of ready-to-use processors with peripheral buses and libraries of standard peripherals, with different functionality and sizes, from simple interfaces to complicated 64-bit processors requiring a couple of million transistors.

Publication: New features of deductive fault simulation (EWDTW, 2004). Hahanov, V. I.; Obrizan, V. I.; Kiyaszhenko, A. V.; Pobezhenko, I. A.
Design Automation Department, Kharkov National University of Radio Electronics, Lenin ave, 14, Kharkiv, 61166, Ukraine. E-mail: hahanov@kture.kharkov.ua
This paper describes the Fast Backtraced Deductive-Parallel Fault Simulation method, oriented to processing large digital devices described at RTL or gate level. The article also describes data structures and algorithms for implementing the method in automated design-for-test (DFT) systems.
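The paper's own method is not reproduced here, but the classic deductive rules on which such fault simulators build can be sketched for a 2-input AND gate: a fault list holds the faults that flip a line's fault-free value, and the output list follows from the gate's controlling values (output stuck-at faults are omitted for brevity):

```python
def deduce_and(va, vb, La, Lb):
    """Fault list at the output of a 2-input AND gate (classic deductive rules).

    va, vb are the fault-free input values; La, Lb are the sets of faults
    that invert each input.
    """
    if va == 1 and vb == 1:   # both non-controlling: flipping either input flips the output
        return La | Lb
    if va == 0 and vb == 0:   # both controlling: both inputs must flip together
        return La & Lb
    if va == 0:               # only a controls: a must flip while b stays non-flipped
        return La - Lb
    return Lb - La            # symmetric case: only b controls

# Example: a=0, b=1; fault f1 flips both lines, so it cannot propagate.
out_faults = deduce_and(0, 1, {"f1", "f2"}, {"f1"})   # only f2 propagates
```

The union/intersection/difference structure of these rules is exactly why set-operation performance dominates deductive simulation runtime, as discussed in the set-operation entry below.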
The work is motivated by the importance of dramatically improving test generation speed for complex digital devices implemented in ASICs. Well-known automatic test generation and fault simulation systems from vendors such as Cadence, Mentor Graphics, Synopsys, and LogicVision are oriented to processing whole logic blocks (chips). But the maximum size of such logic blocks is about a hundred thousand equivalent gates, and the processing time is several hours or more, which is not acceptable for today's multi-million-gate digital designs. Therefore, a new approach to the problem is needed, one that speeds up digital system analysis and test generation. To solve this problem a new technology has been used, and a fast fault simulation method has been developed. The unit under test is a digital system, which can be implemented in an ASIC and is described in an HDL. The objective is to develop a high-performance stuck-at fault simulation method for evaluating the quality of generated tests for digital systems; the method should satisfy designers of multi-million-gate devices.

Publication: Use of parallelism in finite state machines. Mathematical level (EWDTW, 2004). Krivoulya, G. F.; Nemchenko, O.
A method for describing parallel systems using finite state machines is examined. The mathematical model proposed in this article can be used both for the description and for the synthesis of synchronous and asynchronous parallel systems, with the usual abstractions of parallel programming: threads, processes, flags, mutexes, semaphores and complex synchronizations. One way of raising the productivity of computing systems is the use of parallelism in operation. Two ways of parallel data processing exist: strict parallelism and the instruction pipeline [1]. The instruction pipeline is present in personal computers; parallel processing is present, as a rule, in specialized computing systems, i.e. parallel computers. Parallel computer architectures differ strongly from one another.
There is a problem of code portability: a program that runs effectively on one system makes practically no use of the resources of another. In other words, the portability of programs and of compiled code for parallel computers is weak. Parallelism can be created on the basis of several computing units, but such a system is complicated by the presence of those units, the switching equipment and the interconnection interfaces. It is much better to create the parallel arrangement in one chip with the required architecture; in this case programmable logic (CPLD, FPGA) can be used. This article discusses a mathematical model of a parallel finite state machine (FSM) which can be used for the description of parallel algorithms and for their synthesis.

Publication: Network safety. Problems and perspectives (EWDTW, 2004). Nemchenko, Volodymyr
The necessity of information protection for networks is shown. An analysis of certain types of network attacks is given, and the principles and perspectives of protecting information in networks are presented. This paper analyzes the state of the art in the network safety area. The problem is especially topical considering the escalating number of network attacks registered daily on the Internet. The work assesses the situation with network safety; a summary classification of attacks with an analysis of the basic attack types is given, and some characteristics of the main types of attacks are shown. In reality, today almost every server is attacked several times a day. Information from CERT (the Computer Emergency Response Team) shows the distribution of the number of incidents connected with network attacks registered on the Internet by year, from 1988 (6 incidents) until today (137,529 incidents in 2003) [http://www.cert.org]. In total, 319,992 cases of network attacks have been registered during this period.
Figure 1 presents this distribution.

Publication: Set operation speed-up of fault simulation (EWDTW, 2004). Zaychenko, S. A.; Parfentiy, A. N.; Kamenuka, E. A.; Ktiaman, H.
This paper presents data structures and algorithms for performing set-theoretic operations on lists of faults within the deductive fault simulation method for digital systems. Four types of data structures and calculation procedures are suggested, which provide maximum performance for the basic operations required for an effective software implementation of the method. Hardware designers and manufacturers demand significant performance acceleration of fault simulation and automatic test pattern generation (ATPG) tools [1] for large-scale digital systems targeted at application-specific integrated circuits (ASICs). Over 50% of existing ATPG systems [1-4] use the deductive method of fault simulation to obtain the table of faults covered by the applied test. The performance distribution analysis of the computation cycle during test-vector processing within the deductive method (Fig. 1) shows that about 70% of the time is spent performing set-theoretic operations on lists of faults: union, intersection and complement (difference). That is why the performance of a software implementation of the deductive method strongly depends on the efficiency of the implementation of the set operations. A software implementation of the set operations may use classic data structures and algorithms, whose efficiency differs for various numbers of elements under processing. In the deductive fault simulation method, computations are performed at the same time on sets with widely varying numbers of elements. That is why there is no single well-known data structure in general programming that provides acceptable performance of the set operations for the deductive fault simulation method.
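The paper's four data structures are not reproduced in the abstract; as an illustration of the trade-off it describes, here are two common fault-set representations whose relative efficiency depends on how dense the set is (sparse sets favor sorted id lists, dense sets favor bitsets):

```python
# Two candidate fault-set representations (illustrative only).

def merge_union(a, b):
    """Union of two sorted fault-id lists in O(len(a) + len(b)) time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        elif a[i] > b[j]:
            out.append(b[j]); j += 1
        else:                      # common element: emit once
            out.append(a[i]); i += 1; j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out

def bitset(ids):
    """Bitset representation: one bit per fault id; union/intersection/
    difference become single word-wide |, &, &~ operations per 64 faults."""
    m = 0
    for f in ids:
        m |= 1 << f
    return m

u = merge_union([1, 3, 5], [2, 3, 6])        # sorted-list union
v = bitset([1, 3]) & bitset([3, 6])          # bitset intersection
```

A strategy that switches representation by set size, which is the spirit of the approach described above, would wrap both behind one interface.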
The research goal is to analyze and select optimal data structures and processing algorithms for the set-theoretic operations, providing the highest performance and lowest memory usage for a software implementation of the deductive fault simulation method. The research tasks include: analysis of the classic data structures used in discrete mathematics [5, 6] and general programming [7-9] for implementing set-theoretic operations; development of a computation strategy that provides high speed and low memory usage for fault simulation of large-scale digital systems; and an efficiency assessment of the developed strategy.

Publication: Hierarchical hybrid approach to complex digital systems testing (EWDTW, 2005). Hahanova, I. V.; Obrizan, V.; Ghribi, W.; Yeliseev, V.; Ktiaman, H.; Guz, O. A.
This paper offers an approach to complex digital system testing based on hierarchy scaling during the diagnosis experiment. Several models of testing are proposed and the main principles of organizing a testing system are given. The approach allows a significant reduction of overall system testing and verification time.

Publication: Assertions based verification for SystemC (EWDTW, 2005). Forczek, M.; Zaychenko, S.
Assertion-Based Verification (ABV) has gained worldwide acceptance as a verification methodology for electronic system designs, and a number of papers [1-3] explain the methodology in depth. The original concept of an assertion comes from software development, where it (in particular the assert() macro defined in the C language [4]) has proved to be a very powerful tool for automatic bug and regression detection [5]. Assertions for hardware designs employ Linear Temporal Logic (LTL) to define expected and/or forbidden behavior. The foundation for ABV is Hardware Verification Languages (HVLs), which combine the semantics of LTL with constructs for building reusable verification IP units. Verification IP units need to be bound to some design for effective use.
Thus HVLs provide constructs for specifying connections with models in Hardware Description Languages (HDLs). Most ABV implementations are part of HDL-based integrated design environments (IDEs). The SystemC open initiative [6] provides an alternative to HDLs, as it extends C++ [7], the industry-strength notation for complex systems, with the hardware concepts of RTL and system level in the form of a C++ template library. In its original approach, SystemC models are processed by a standard C++ toolset and executed as standalone applications, and SystemC has become a very popular environment for modeling at the system level of abstraction. HDL-based IDEs offer co-simulation capabilities with a SystemC engine, but that engine still remains a unit external to the HDL simulator. The idea of applying ABV to SystemC designs is a natural step in the integration of HDL and SystemC environments. Since an HDL design can be co-simulated with a SystemC model, there is an easy way to associate a verification unit with a SystemC one: the SystemC unit is connected to an HDL wrapper unit that provides an entry point for binding the verification unit. This method does not require any additional tools, assuming the availability of an HDL simulator.

Publication: Verification and testing RKHS series summation method for modelling radio electronic devices (EWDTW, 2005). Chumachenko, S. V.; Chugurov, I. N.; Chugurova, V. V.
A Reproducing Kernel Hilbert Space (RKHS) method for series summation is developed, which allows analytically obtaining alternative representations of series in finite form. To increase the efficiency of solving computational tasks, mathematical co-processors are used which implement the most efficient ways of computing equations, integrals, derivatives, etc. Clearly, when new methods of increasing computation accuracy and decreasing computation time are discovered, it becomes necessary to re-implement the mathematical coprocessors or to use a new generation of IP cores in PLD, gate array and ASIC designs.
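As a toy illustration of what "reducing a series to an exact finite form" buys (using a textbook identity, the Basel series, not one of the paper's model-formulas): truncated term-by-term summation carries a visible tail error, while the closed form is exact and costs a single evaluation.

```python
import math

def partial_sum(terms):
    """Truncated numerical summation of sum(1/n^2, n = 1..terms)."""
    return sum(1.0 / n**2 for n in range(1, terms + 1))

# Known closed form of the infinite series: sum 1/n^2 = pi^2 / 6.
closed_form = math.pi**2 / 6

# Even 10,000 terms leave a tail error of roughly 1/10000.
err = closed_form - partial_sum(10_000)
```

The tail of this series behaves like 1/N, so tens of thousands of terms still only give four to five correct digits; this is the kind of gap a finite-form representation removes entirely.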
A method for reducing the computation of certain types of series to an exact function, widely used in calculating the parameters of high-radio-frequency devices, was presented in [1-4]. This method decreases the computation time of such tasks by tens and hundreds of times, and its inaccuracy is equal to zero. The purpose of this investigation is the verification and testing of the Series Summation Method in RKHS for modelling radio electronic devices.

Publication: Assertions-based mechanism for the functional verification of the digital designs (EWDTW, 2005). Hahanov, V. I.; Yegorov, O.; Zaychenko, S.; Parfentiy, A.; Kaminska, M.; Kiyaschenko, A. V.
According to [1], the verification cost of digital devices designed on the basis of ASIC, IP-core and SoC technologies takes up to 70% of the overall design cost; similarly, up to 80% of the project source code implements a testbench. Reducing these two parameters minimizes time-to-market, which is one of the main problems for the world-leading companies in the area of Electronic Design Automation (EDA). The goal of the verification tasks is to eliminate all design errors as early as possible, so as to meet the requirements of the specification; each time an error passes through a subsequent design stage (from a block to a chip, and later to a system), the cost of its elimination increases. Validation, a higher-level verification activity, confirms the correctness of the project against problems in the implementation of the major specified functionality. The goal of this paper is to noticeably decrease verification time by extending the design with software-based redundancy: the assertions mechanism [2-5], which allows the major specified constraints to be analyzed simply during device simulation and the errors to be diagnosed when they are detected. To achieve the declared goal it is necessary to solve the following problems: 1) to formalize the assertions-based product verification process model; 2) to develop the software components for synthesis and analysis of the assertions for functionality, blocks and the entire system; 3) to obtain experimental confirmation of the benefits of using assertions to reduce time-to-market or, in other words, to noticeably reduce verification and overall design time.

Publication: High level FSM design transformation using state splitting (EWDTW, 2005). Kulak, E.; Kovalyov, E.; Syrevitch, Ye.; Grankova, E.
One of the problems in testbench generation for extended finite state machines (EFSM) is the existence of internal variables. The usage of these variables in transition conditions increases the real number of states by orders of magnitude; even for a variable with a bit length of 20 it leads to the state explosion problem. But for some control units it is possible to redesign the project by including the state variables in the state register. The transformation algorithm contains phases of state splitting, transition splitting, unreachable (dead) state reduction and equivalent state minimization. The results of such a transformation can be used for design analysis, optimization, validation, verification, synthesis and implementation. This paper was motivated by the authors' work in the project ASFTest, a testbench generator for Aldec finite state machines. The graphical user interface used in state-of-the-art software allows creating a design entry environment based on the finite state machine abstraction. This form of design description is used in many software and hardware design tools such as StateCAD, FPGA Advantage, Stateworks, Stateflow, etc. The algorithm is described graphically using the extended FSM notation. VHDL is chosen as the target language. Synthesis is performed by the Xilinx synthesis tool included in the Xilinx WebPack environment.
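Two of the transformation phases named above, state splitting and dead-state reduction, can be illustrated on a toy EFSM (this sketch is an assumption about the general idea, not the paper's algorithm; transition splitting and equivalent-state minimization are omitted):

```python
from itertools import product

def split_states(states, var_values):
    """State splitting: one FSM state per (EFSM state, internal-variable value) pair."""
    return {(s, v) for s, v in product(states, var_values)}

def reachable(start, transitions):
    """Unreachable (dead) state reduction via a simple worklist search."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy EFSM: states A, B with a 1-bit flag folded into the state register.
full = split_states({"A", "B"}, {0, 1})             # 4 candidate product states
trans = {("A", 0): [("B", 1)], ("B", 1): [("A", 0)]}
alive = reachable(("A", 0), trans)                  # half of them are dead
```

With a 20-bit variable the same splitting would multiply the state count by 2^20, which is exactly the state-explosion the reduction phases must then cut back.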
The target device is the CPLD CoolRunner-II.

Publication: Brainlike computing (EWDTW, 2005). Shabanov-Kushnarenko, Yu.; Klimushev, V.; Lukashenko, O.; Nabatova, S.; Obrizan, V.; Protsay, N.
This paper offers a mathematical foundation for a brainlike computer. A new approach to building artificial intelligence is discussed: human intelligence is considered as a material embodiment of the mechanism of logic. The hardware efficiency of a logic net implementation is also shown. The urgency of the research is determined by the necessity of designing a parallel computer for a significant performance increase in comparison with software implementations on von Neumann architectures. The goal of the research is the design of a parallel computer operating by the principles of the human brain and built on a modern element base. To reach the goal it is necessary to solve the following tasks: 1) devising a new approach to artificial intelligence, in which human intelligence is considered as a material implementation of the mechanism of logic; 2) algebraization of logic; 3) formalization of the logic net model; 4) development of logic synthesis procedures for the logic net; 5) design of a logic net design flow; 6) analysis of hardware implementation efficiency. Rapidly progressing computerization and informatization demand a constant increase in the productivity of electronic computers. However, this is more and more difficult to achieve, as the reserves for increasing the speed of the computing elements are getting exhausted. Another way is to increase the number of simultaneously operating elements in the processor. Nowadays there is a practical possibility of building computers with up to 10^8 elements, based on the successes of microminiaturization, the falling price of electronic elements, and achievements in the automation of computer design and manufacturing. However, with present serial computers operating on the principle of program control by J.
von Neumann, it is senseless to do this, as only a small number of elements are in operation simultaneously during each period of time. Attempts at conversion to parallel machines do not provide the expected growth of productivity: for example, the productivity of multiprocessor computers does not grow proportionally to the number of processors available in them, as it apparently should, but much more slowly. There are also essential difficulties in attempts to create high-efficiency neurocomputers constructed as formal neuron networks. Meanwhile, there is a "computer" created by nature, namely the human brain, for which the problem of highly parallel information processing is completely solved. The human brain is slow in comparison with the modern computer. Its "clock frequency" can be estimated by the throughput of nervous fibers: it is known that each nervous fiber can pass no more than 10^3 pulses per second, while the conductors of modern computers can transfer about 10^9 pulses per second. Hence, the computer surpasses the human brain in the speed of its computing elements by 10^9 / 10^3 = 10^6 times. Nevertheless the brain, due to its parallel principle of action, works faster and is capable of solving immeasurably more difficult tasks than the most powerful modern computers with program control. This is caused by the fact that the human brain incorporates about 10^15 computing elements (their role is played by synapses, the interfaces between the ends of nervous fibers), and according to neurophysiology all of them operate simultaneously, whereas in serial computers only a small number of elements operate in parallel at any moment.

Publication: Design of Wavelet Filter Bank for JPEG 2000 Standard (EWDTW, 2006). Hahanova, I. V.; Fomina, E.; Sorudeykin, K.; Hahanov, V. I.; Bykova, V.
Models, a method and a hardware implementation of a lifting-based wavelet filter scheme for the JPEG 2000 standard are proposed.
The JPEG 2000 image compression standard is used for data transmission, printing and scanning of images, and digital photography. The low-pass and high-pass filters for implementing the JPEG 2000 transformation are described, and the obtained results are compared with the same parameters of other discrete wavelet transformation (DWT) devices proposed in other references. The purpose of this work is an essential speed increase of an ad hoc pipelined, lifting-based DWT hardware implementation. JPEG 2000 is a new image compression algorithm based on the discrete wavelet transformation of the input data; the technique is the next development of the JPEG group and can be used for data transmission over the Internet, for image printing and scanning, and for digital photography. Reducing the transform time by means of ad hoc SoC architectures essentially increases the attractiveness of the device. To achieve this purpose the following tasks have been solved: 1) digital models and their transformation methods were considered; 2) a lifting-based wavelet transformation hardware architecture was designed; 3) a control algorithm for the DWT was created; 4) the DWT device was implemented on a Xilinx FPGA; 5) the digital system was tested and verified, and the speeds and SNR of different device versions were compared. A DWT device architecture was developed that does not use external memory, which increases the device speed and also allows reducing the device cost. An IP core for a JPEG 2000 encoder/decoder SoC is proposed. In this work the emphasis was put on fast control block design, not only on the arithmetic blocks considered in the cited references; this allows increasing the speed of the whole device. A speed analysis was performed for devices implemented on different Xilinx FPGA series with different memory types and transformed image sizes, and the device was compared with existing prototypes by speed and area. The scientific novelty is a pipelined DWT device intended for use as an IP core implemented in a programmable chip.
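The lifting scheme underlying such devices can be shown with the reversible 5/3 filter bank of JPEG 2000, written as a predict step followed by an update step (a standard formulation independent of the paper's architecture; the boundary handling here is simplified to clamping rather than the standard's symmetric extension):

```python
def dwt53_forward(x):
    """One level of the reversible JPEG 2000 5/3 lifting transform.

    Returns (lowpass, highpass) for an even-length integer signal.
    """
    n = len(x)
    d = [0] * (n // 2)  # highpass (detail) coefficients
    s = [0] * (n // 2)  # lowpass (smooth) coefficients
    for i in range(n // 2):  # predict: odd samples minus average of even neighbors
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d[i] = x[2 * i + 1] - (x[2 * i] + right) // 2
    for i in range(n // 2):  # update: even samples corrected by neighboring details
        left = d[i - 1] if i > 0 else d[i]
        s[i] = x[2 * i] + (left + d[i] + 2) // 4
    return s, d

# A constant signal yields zero detail coefficients and an unchanged lowpass band.
s, d = dwt53_forward([5, 5, 5, 5, 5, 5, 5, 5])
```

Because both steps use only integer adds and shifts, the scheme maps directly onto the kind of simple pipelined ALU the abstract describes.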
The proposed device has a simpler ALU part and does not use external memory, and so is faster and cheaper than existing analogs. The practical significance is the proposal of a simple, technological and effective DWT device with high speed and low power consumption, which is its advantage over software implementations of the JPEG 2000 standard. Further work steps: 1) DWT and IDWT device design for the 5/3 and 9/7 JPEG 2000 filter banks; 2) DWT and IDWT device implementation using a Xilinx Virtex-4 DSP processor.

Publication: SUM IP Core Generator – Means for Verification of Models-Formulas for Series Summation in RKHS (EWDTW, 2006). Hahanov, V. I.; Chumachenko, S. V.; Skvortsova, Olga; Melnikova, Olga
The program system SUM IP Core Generator, a means for verifying model-formulas for series summation in Reproducing Kernel Hilbert Space (RKHS), is offered. It allows entering the description of a model-formula through a GUI; modelling the model-formulas with the software products Mathematica, Synplify, ModelSim, Riviera, Active-HDL; generating the source files of an IP core in the hardware description languages VHDL, Verilog and SystemC; generating script files for modelling, synthesis, implementation and timing simulation; synthesizing tests, parameters and conditions for verification on the basis of a testbench; and carrying out post-synthesis modelling to reveal mistakes in the code. The structure of the proposed SUM IP Core Generator system is represented in Fig. 1. The purpose of the system is an essential reduction of data preparation time through a user-friendly GUI, with a view to subsequent modelling for determining the adequacy and accuracy of the model-formulas, and also the automatic generation of the HDL code considered as an IP core. The problems solved (see Fig. 1) are: 1) input of the model-formula description through the GUI; 2) modelling of the model-formulas with the software products Mathematica, Synplify, ModelSim, Riviera, Active-HDL; 3) generation of the IP core source files in VHDL, Verilog and SystemC; 4) generation of script files for modelling, synthesis, implementation and timing simulation; 5) synthesis of tests, parameters and conditions for verification on the basis of a generated testbench; 6) post-synthesis modelling to reveal mistakes in the code.

Publication: System Level Methodology for Functional Verification SoC (EWDTW, 2006). Adamov, A.; Zaychenko, S.; Myroshnychenko, Y.; Lukashenko, O.
Building a verification environment and the associated tests is a highly time-consuming process. Most project reports indicate that between 40% and 70% of the entire effort of a project is spent on verification, with 70% being much closer to the normal level for successful projects. This high level of effort indicates that the potential gains to be made from successful reuse are significant. Most projects do not start with a complete set of hardware designs available for functional verification; usually a design comes together as smaller blocks, which are then integrated into larger blocks and eventually into a system. That is the reason for performing functional verification at the system level. The paper describes a system-level modeling environment for the functional verification of System-on-a-Chip models. The system level allows design teams to rapidly create large system-on-a-chip designs (SoCs) by integrating pre-made blocks that do not require any design work or verification. One of the hottest topics in embedded system design today is Electronic System Level (ESL) design. Although the idea of being able to describe a system at an abstract level has been around for a decade, only now are various parts of the design flow becoming available to make it practical.
ESL describes a System-on-Chip (SoC) design in an abstract enough and fast enough way to explore the design space and provide virtual prototypes for hardware and software implementation. It is becoming a fundamental part of the design flow because it can now be used throughout the iterative design process rather than just in the early system architecting phase. ESL provides tools and methodologies that let designers describe and analyze chips at a high level of abstraction, easing the pain of designing electronic systems which would otherwise be too costly, complex or time-consuming to create. The adoption of ESL can be seen in the same light as the transition to register transfer level (RTL) methodologies 10-15 years ago, when complexity and time-to-market pressures obliged the industry to step up to another design level. As designs become larger, with more and more IP blocks, engineers will reuse more IP, and ESL methodologies that enable platform-based design will be increasingly necessary to create and test a complete system. For the most complex SoCs, IP reuse can only help up to a point: for a 40-million-gate SoC, filling even 75% of the device with existing IP leaves 10 million gates to design with original content. ESL methodologies which allow rapid creation of new blocks are likely to be leveraged by designers to quickly develop and verify original content to fill that 10-million-gate void while meeting time-to-market requirements. Among the 24% of respondents who have implemented some form of ESL design methodology, an overwhelming 87% believe ESL provides an acceptable or greater return on investment.

Publication: Реализация процедур импликации на графовой структуре (Науково-технічний журнал: Радіоелектронні і комп'ютерні системи (РЕКС), 2006). Шкиль, А. С.; Чегликов, Д. И.; Зинченко, Д. Е.
In this work, an internal representation and a software model of the procedures of direct and backward implication on graph structures were developed, with the purpose of verifying a fragment of VHDL code.

Publication: Dynamic Register Transfer Level Queues Model for High-Performance Evaluation of the Linear Temporal Constraints (EWDTW, 2006). Zaychenko, S.; Hahanov, V.; Zaharchenko, O.
Today Assertion-Based Verification (ABV) is by all means the most effective verification technology for SoC designs. Assertions provide the basic blocks for building a functional verification concept and easily catch many design errors in the early phases. This paper suggests a new, effective algorithmic model for assertion checking within testbench-based simulation. Algorithms for handling the key temporal operators of the Property Specification Language (PSL) are described, and the paper demonstrates the advantages of the suggested model over existing equivalents in simulation performance, verification efficiency and model extensibility. Obviously, the verification process is a very complex and very expensive part of the modern SoC design cycle. This process consists of searching the model for mistakes that cause the design to violate the functional specification, localizing the causes of the problems and applying fixes. According to the opinion of EDA industry experts, the cost of verification in ASIC designs [1] often exceeds 70% of the entire project budget [2].
Such a high cost of system quality is driven by several factors, in particular: – a large number of missed details and mistakes made by SoC designers in the RTL code, mistakes made by verification engineers in the testbenches, and the inevitable ambiguities of the original design specification; – drawbacks in the chosen design flows, which complicate bug localization and fixing and miss the possibilities for early discovery of typical problems; – the relatively low performance of, and bugs within, the selected automation tools, which reach their quality and performance goals much more slowly than the input design complexity rises. Resolving these problems altogether and reducing the cost of the SoC verification cycle is currently a primary goal for the entire EDA world [3]. Leading EDA companies and industry experts are focused on developing a new generation of complex design verification methods, which will be able to: – minimize human participation in the routine design and verification procedures, which will obviously decrease the probability of mistakes severalfold; – catch the largest number of problems in the early design phases, reducing the average cost of a fix; – upgrade the performance and stability of design verification systems by raising the abstraction level both for the SoC models and for the test stimuli. There are two basic directions in modern SoC verification methods – dynamic methods [2,4], based on simulation, and static, or formal, methods [5,6], based on a mathematical proof of certain system properties without test stimuli. Hybrid methods [7] are also used, which combine simulation and functional-coverage results to improve the performance of formal methods.
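As a hedged illustration of the dynamic (simulation-based) direction just mentioned, a checker for one simple linear temporal constraint over a recorded trace can be sketched in Python. The property ("every req must be acknowledged by ack within three cycles"), the trace encoding and all names are assumptions made for this sketch only; they do not reproduce the paper's PSL queue model.

```python
# Illustrative sketch of dynamic assertion checking over a simulation
# trace: every cycle that raises "req" must be answered by "ack" within
# `window` cycles.  The trace format (a list of dicts, one per clock
# cycle) and the property itself are assumptions for this sketch.

def check_req_ack(trace, window=3):
    """Return the cycle numbers of requests that violate the constraint."""
    failures = []
    pending = []                        # cycles with a not-yet-acknowledged req
    for cycle, signals in enumerate(trace):
        if signals.get("ack"):
            pending.clear()             # one ack satisfies all pending requests
        expired = [c for c in pending if cycle - c >= window]
        failures.extend(expired)
        pending = [c for c in pending if cycle - c < window]
        if signals.get("req"):
            pending.append(cycle)
    failures.extend(pending)            # reqs still open at end of trace fail
    return failures

trace = [{"req": 1}, {}, {"ack": 1},   # req at cycle 0 is acked at cycle 2
         {"req": 1}, {}, {}, {}]       # req at cycle 3 is never acked
print(check_req_ack(trace))            # -> [3]
```

In a real ABV flow such a checker would be driven cycle by cycle from inside the simulator rather than over a recorded trace, which is exactly where an efficient queue model for pending obligations matters.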
This work is focused on the assertion-based verification technology [8,9], which plays a role in both dynamic and formal methods.

Публікація Logic and Fault Simulation Based on Shared-Memory Processors (EWDTW, 2006) Obrizan, V.; Shipunov, V.; Gavryushenko, A.; Kashpur, O.

Existing software in Electronic Design Automation shows a lack of dual-core processor support. As a result, we see poor utilization of processing resources. This work-in-progress is devoted to an exploration of existing approaches to parallel logic and fault simulation on dual-core workstations. The scale of modern digital systems-on-chip continuously increases the complexity of testing during design and manufacturing. This makes the problem of fault simulation and automatic test pattern generation more and more relevant. The performance of fault and fault-free simulation software and the speed of workstations grow noticeably more slowly than the structural and functional complexity of digital systems, or the verification cost. In the era of embedded systems, it is easy to create complex devices using a system-level approach, but at the same time it is hard to simulate, verify and test such devices. Previously, engineers used high-performance workstations to reduce simulation run time. But nowadays microprocessor frequencies have stopped rising, and to solve performance problems computers are entering an era of multi-core processing. Multiprocessors have come to home and office desktops, not only to supercomputer centers. Thus, gigahertz alone no longer determine the performance of a workstation. It is also well known that a single-threaded application or a serial algorithm (even one best optimized for serial processing) shows no speed-up on multi-processor systems. These days, each application must be designed to gain maximum performance from multi-core architectures. This statement is the baseline of the proposed research. The goal of the research is to reduce simulation run time using efficient shared-memory processing.
Research tasks: 1) analyze existing algorithms and software products with respect to serial and parallel data processing; 2) develop parallel algorithms for efficient shared-memory utilization; 3) develop a software implementation and conduct verification and testing.
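As a hedged sketch of the shared-memory idea in the entry above (not the authors' implementation), the fault list of a toy combinational circuit can be partitioned across worker threads that share the circuit description, each comparing the faulty response against the fault-free one. The netlist, the stuck-at fault model and the thread count are assumptions for illustration.

```python
# Toy parallel stuck-at fault simulation: the fault list is split across
# a pool of worker threads that share the circuit description in memory.
# Note: CPython's GIL limits CPU parallelism for pure-Python workers; a
# real simulator would use native threads or processes for speed-up.
from concurrent.futures import ThreadPoolExecutor

def simulate(pattern, fault=None):
    """Evaluate the toy netlist y = (a AND b) OR c, with an optional
    (net, stuck_value) fault forcing one net to a constant."""
    def net(name, computed):
        return fault[1] if fault and fault[0] == name else computed
    a = net("a", pattern["a"])
    b = net("b", pattern["b"])
    c = net("c", pattern["c"])
    n1 = net("n1", a & b)
    return net("y", n1 | c)

def detected(fault, pattern):
    """A fault is detected when the faulty output differs from the good one."""
    return simulate(pattern) != simulate(pattern, fault)

faults = [(n, v) for n in ("a", "b", "c", "n1", "y") for v in (0, 1)]
pattern = {"a": 1, "b": 1, "c": 0}
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda f: (f, detected(f, pattern)), faults))
detected_faults = [f for f, d in results if d]
print(detected_faults)   # -> [('a', 0), ('b', 0), ('n1', 0), ('y', 0)]
```

Partitioning by fault keeps the workers independent (each injects its own fault into a shared, read-only circuit), which is one of the standard decompositions for shared-memory fault simulation.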