Design Automation Department (Кафедра автоматизації проектування обчислювальної техніки, АПОТ)
Browsing the collection by date of issue; now showing items 1-20 of 537.
Publication: Дизайнер для создания гипертекстовых учебных материалов [A designer for creating hypertext learning materials] (УАДО, 2002). Шеховцов, Б. Г.; Капустин, С. В.
The paper considers the principles of constructing a system for creating electronic materials with a uniform design, which allows them to be used as hypertext tutorials. The technologies used are HTML, CSS, PHP, and JavaScript. Code fragments are presented together with descriptions of their purpose.

Publication: Управление программными лабораторными средствами в системе дистанционного обучения [Managing laboratory software in a distance learning system] (2002). Шеховцов, Б. Г.; Середенко, В. А.
With the development of distance learning comes the problem of managing all the software hosted on the server side of the system. The article considers one solution to this management problem, using software developed at the faculty.

Publication: Технологии обучения автоматизированному проектированию цифровых систем [Technologies for teaching computer-aided design of digital systems] (УАДО, 2002). Хаханов, В. И.
The paper presents the experience of educating students in special design-automation groups oriented toward future work at companies in Europe and America. Several key courses are singled out: Digital System Design, UNIX, VHDL & Verilog, C++, and English for Design. They are necessary for practical problem solving in design entry, modeling and simulation, synthesis, place and route, and implementation.

Publication: Структура электронного учебника «Диагностика и моделирование» [Structure of the electronic textbook "Diagnostics and Simulation"] (УАДО, 2002). Шкиль, A. C.; Севастьянов, А. Б.; Мазур, М. С.
The paper considers the file and navigation structure of interactive tutorials, taking the "Diagnostics and Simulation" textbook as an example. Three-level thematic structuring is applied, and the assembly of the information and navigation content of the sites is considered.

Publication: Методика оценивания в компьютерной системе тестирования знаний [A grading technique for a computer-based knowledge-testing system] (УАДО, 2002). Шкиль, A. C.; Напрасник, С. В.; Чумаченко, С. В.
The paper addresses knowledge evaluation in a learning-and-testing system. The developed system uses questions with several alternative answers within one session. A formula for calculating the range limits of the rating scale is proposed; it depends on the number of questions in a testing session, the number of alternatives in each question, and the number of marks in the rating scale. The proposed formula also takes the factor of randomly guessed answers into account.
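The abstract above does not reproduce the authors' formula, so the following C++ sketch only illustrates the general idea under stated assumptions: with Q single-choice questions of k alternatives each, pure guessing scores Q/k on average, so the mark boundaries can be spread between that guess baseline and the maximum score. The function name and the linear partition are hypothetical, not the paper's method.

```cpp
#include <vector>

// Illustrative sketch only: hypothetical rating-scale boundaries for
// Q single-choice questions with k alternatives and m marks.
// A pure guesser scores Q/k correct answers on average, so the scale
// starts at that baseline rather than at zero.
std::vector<double> rangeLimits(int Q, int k, int m) {
    const double guessBaseline = static_cast<double>(Q) / k; // expected random score
    std::vector<double> limits;
    for (int i = 1; i < m; ++i)               // m marks need m - 1 inner boundaries
        limits.push_back(guessBaseline + (Q - guessBaseline) * i / m);
    return limits;
}
```

For Q = 24, k = 4, m = 4 this yields boundaries at 10.5, 15.0, and 19.5 correct answers, i.e. the band below 10.5 is indistinguishable from guessing.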
Publication: Проектирование цифровых систем с использованием языка VHDL [Designing digital systems using the VHDL language] (Харьковский национальный университет радиоэлектроники, 2003). Семенець, В. В.; Хаханова, И. В.; Хаханов, В. И.

Publication: Reproducing Kernel Hilbert Space methods for CAD tools (EWDTW, 2004). Chumachenko, S. V.; Khawar, Parvez; Gowher, Malik
The paper reviews known RKHS methods to assess the current state of research, locates the Series Summation Method within the theory of Reproducing Kernel Hilbert Spaces (RKHS), and discusses new results obtained by this method. RKHS methods are of interest both in pure theory and in applications. RKHS theory is a well-studied topic, stemming from the original works of [1] through more recent studies of its applications [2, 3, 8-11]. Mathematical models based on RKHS and causal operators are presented in [3]; they are used in pattern recognition [4], digital data processing [5], image compression [6], and computer graphics [7]. These directions share a common mathematical tool, the theory of wavelets [4]. RKHS methods are a basic tool in exact incremental learning [8] and in statistical learning theory [2, 9]. The general theory of reproducing kernels combined with linear mappings in the framework of Hilbert spaces is considered in [2]. A framework for discussing the generalization ability of a trained network in the original function space, using tools of functional analysis based on RKHS, is introduced in [8]. A special kind of kernel-based approximation scheme is also closely linked to regularization theory [10] and to approximation schemes based on Support Vector Machines [11].
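As a minimal illustration of the reproducing-kernel machinery these results build on (standard textbook material, not the authors' Series Summation Method): a function of a Gaussian RKHS is a finite combination f(x) = sum_i alpha_i k(x_i, x), and evaluating it is exactly an application of the reproducing property.

```cpp
#include <cmath>
#include <vector>

// Gaussian (RBF) kernel k(x, y) = exp(-(x - y)^2 / (2 * sigma^2)),
// a classic reproducing kernel on the real line.
double kernel(double x, double y, double sigma = 1.0) {
    const double d = x - y;
    return std::exp(-d * d / (2.0 * sigma * sigma));
}

// Any finite combination f(x) = sum_i alpha_i * k(x_i, x) lies in the RKHS;
// the reproducing property <f, k(x, .)> = f(x) makes this evaluation exact.
double evaluate(const std::vector<double>& centers,
                const std::vector<double>& alpha, double x) {
    double f = 0.0;
    for (std::size_t i = 0; i < centers.size(); ++i)
        f += alpha[i] * kernel(centers[i], x);
    return f;
}
```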
Publication: Verification tests generation features for microprocessor-based structures (EWDTW, 2004). Krivoulya, G. F.; Shkil, A. S.; Syrevitch, Ye.; Antipenko, O.
A model of a microprocessor-based device as a bichromatic multidigraph with vertices of two types is proposed. Test generation features for functional testing using an updated path-activation algorithm on the structural model are described. A range method for representing data of different formats is introduced, and algorithms for direct implication and backtracing of different types of operations, together with their software implementation, are presented. The whole set of deterministic test generation methods for digital devices can be divided into two large groups: structural and functional. Originally, structural methods were oriented to gate-level models of digital devices; however, growing complexity and rising component integration have led to models of highly integrated elements being used as the primitive elements (PEs) of devices [1, 2]. The advantages of this approach are the simple construction of a device model and the formalization of test generation procedures; its drawbacks are the large dimension of the device model and the difficulty of creating and maintaining a library of PE models, which can contain hundreds of components. To overcome these drawbacks, the functional approach to test construction was developed and has received wide circulation [3, 4]. It can be used for digital devices of any complexity, including microsystems with program and microprogram control, since it yields high-level models of such devices. However, functional methods are poorly formalized, because microsystems contain different types of functional blocks, such as the control block, operational block, and address block, and no single method can handle such heterogeneous device types within a uniform approach. This paper proposes a test generation method that further develops the functional approach: at the design stage, the digital device is decomposed into so-called homogeneously tested segments. The authors consider test generation for one type of segment, namely the operational device (OD).
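The "range method" itself is not spelled out in the abstract, so the sketch below is only a guess at the flavor of the idea: an operand of any format is modeled as a closed integer interval, and direct implication and backtracing repeatedly narrow such intervals, with interval intersection as the elementary step. All names here are hypothetical.

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>

// Hypothetical range representation of an operand: the set of values
// it may take, stored as a closed interval [lo, hi].
struct Range {
    std::int64_t lo, hi;
};

// Implication and backtracing over ranges repeatedly narrow operand ranges;
// the elementary step is interval intersection, empty if they don't overlap.
std::optional<Range> intersect(const Range& a, const Range& b) {
    const std::int64_t lo = std::max(a.lo, b.lo);
    const std::int64_t hi = std::min(a.hi, b.hi);
    if (lo > hi) return std::nullopt;  // contradiction: no consistent value
    return Range{lo, hi};
}
```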
Publication: Co-design technology of SoC based on Active-HDL 6.2 (EWDTW, 2004). Hyduke, S.; Yegorov, A. A.; Guz, O. A.; Hahanova, I. V.
The paper presents a technology for designing and verifying digital systems-on-a-chip (SoC), based on the experience of designing hardware and software components of an SoC in one environment. It reflects today's variety of available silicon, software and hardware description languages, and design tools, and offers recommendations and examples. On today's EDA market there are three major target silicon technologies that define the computer world: programmable devices, gate arrays, and ASICs. Fig. 1 (cause-effect relations on the EDA market) relates them to the manufacturing technology of silicon chips, hardware and software description languages, design tools, and SoC methodology. In practice, the influence of SoC on ASIC and FPGA (PLD) design has started an integration between them: powerful embedded processors such as ARM and PowerPC have appeared on FPGAs. For example, the latest Xilinx Virtex-II Pro FPGA carries four embedded IBM PowerPC processors plus 10 million programmable gates available to the user. The design flows of FPGAs and ASICs also began to merge after Altera announced its Structured ASIC flow, in which an FPGA-verified design is transferred to an ASIC without any involvement of the developer. This will influence the world chip market of about $40 billion per year: 1) powerful processors used in servers and workstations; 2) the personal computer segment, where Intel processors hold the leading place with $20 billion; 3) microcontrollers and signal processors, which generate $14 billion of vendor revenue every year and form the fastest-growing of the three segments. Hardware development has reached a stage where the number of transistors grows by 60% per year, but their usage in projects grows by only 20% per year; hence the rapid growth in the number of SoCs, which move all the buses and peripherals of the developed system from the board onto the available chip area. This not only increases the performance of the whole digital system and allows custom functionality, but also significantly reduces energy consumption and the physical size of the final product. At the same time, one of the main requirements in designing complex systems today is the modular approach, whereby the designer reuses modules from previous projects or uses IP (Intellectual Property) cores. For SoCs there is a variety of ready-to-use processors with peripheral buses and libraries of standard peripherals, with different functionality and sizes, from simple interfaces to complicated 64-bit processors requiring a couple of million transistors.

Publication: New features of deductive fault simulation (EWDTW, 2004). Hahanov, V. I.; Obrizan, V. I.; Kiyaszhenko, A. V.; Pobezhenko, I. A.
Design Automation Department, Kharkov National University of Radio Electronics, Lenin ave, 14, Kharkiv, 61166, Ukraine. E-mail: hahanov@kture.kharkov.ua
This paper describes a fast backtraced deductive-parallel fault simulation method oriented to processing large digital devices described at RTL or gate level. The paper also describes data structures and algorithms for implementing the method in automated design-for-test (DFT) systems. The work is motivated by the importance of dramatically improving test generation speed for complex digital devices implemented in ASICs. Well-known automatic test generation and fault simulation systems from vendors such as Cadence, Mentor Graphics, Synopsys, and LogicVision are oriented to processing whole logic blocks (chips); but the maximum size of such blocks is about a hundred thousand equivalent gates, and the processing time is several hours or more, which is unacceptable for today's multi-million-gate digital designs. Therefore, a new approach is needed that speeds up digital system analysis and test generation. To solve this problem a new technology has been applied and a fast fault simulation method has been developed. The unit under test is a digital system, described in an HDL, that can be implemented in an ASIC. The objective is to develop a high-performance stuck-at fault simulation method for evaluating the quality of generated tests for digital systems, suitable for designers of multi-million-gate devices.
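To make the deductive principle concrete (a textbook illustration, not this paper's backtraced-parallel variant): each line carries the list of faults that invert its value, and a gate's output list is computed from its input lists with union, intersection, and difference, depending on the gate's good input values.

```cpp
#include <algorithm>
#include <iterator>
#include <set>

using FaultList = std::set<int>;  // IDs of faults that invert a line's value

FaultList unite(const FaultList& a, const FaultList& b) {
    FaultList r;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::inserter(r, r.end()));
    return r;
}
FaultList intersect(const FaultList& a, const FaultList& b) {
    FaultList r;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::inserter(r, r.end()));
    return r;
}
FaultList subtract(const FaultList& a, const FaultList& b) {
    FaultList r;
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(),
                        std::inserter(r, r.end()));
    return r;
}

// Classic deductive rules for a two-input AND gate with good values (va, vb):
// which input faults flip the output?
FaultList andGateFaults(bool va, const FaultList& la,
                        bool vb, const FaultList& lb) {
    if (va && vb)   return unite(la, lb);      // both non-controlling: any flip propagates
    if (!va && !vb) return intersect(la, lb);  // both controlling: both must flip together
    return va ? subtract(lb, la)               // flip the 0-input while the 1-input holds
              : subtract(la, lb);
}
```

The paper's contribution lies in how such lists are propagated and stored at RTL scale, which this sketch does not attempt.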
Publication: Use of parallelism in finite state machines. Mathematical level (EWDTW, 2004). Krivoulya, G. F.; Nemchenko, O.
The paper examines a method of describing parallel systems using finite state machines. The proposed mathematical model can be used both for the description and for the synthesis of synchronous and asynchronous parallel systems, employing the abstractions of parallel programming: threads, processes, flags, mutexes, semaphores, and complex synchronizations. One way of raising the productivity of computing systems is parallelism. Two ways of parallel data processing exist: true parallelism and instruction pipelining [1]. The instruction pipeline is present in personal computers; parallel processing is present, as a rule, in specialized computing systems, i.e. parallel computers. Parallel computer architectures differ strongly from one another, which creates a code portability problem: a program that executes efficiently on one system makes practically no use of the resources of another. In other words, both source programs and compiled code port poorly across parallel computers. Parallelism can be built from several computing units, but such a system is complicated by the number of units, the switching equipment, and the interconnection interfaces. It is much better to create the parallel arrangement in one chip with the required architecture; in this case programmable logic (CPLD, FPGA) can be used. This article discusses the mathematical model of a parallel finite state machine (FSM) that can be used both for describing parallel algorithms and for their synthesis.
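A minimal sketch of the kind of object such a model captures (my illustration, not the paper's mathematical formalism): two component machines composed in parallel, where the composite state is a pair and one machine's transition is guarded by the other's state, standing in for a flag or semaphore.

```cpp
#include <utility>

// Toy parallel composition of two finite state machines. The composite
// state is the pair (s1, s2); both machines advance in one synchronous step.
enum class S1 { Idle, Work, Done };
enum class S2 { Wait, Run };

std::pair<S1, S2> step(S1 s1, S2 s2, bool start) {
    // Machine 1: Idle -> Work on 'start', then Work -> Done.
    S1 n1 = (s1 == S1::Idle && start) ? S1::Work
          : (s1 == S1::Work)          ? S1::Done
          : s1;
    // Machine 2: leaves Wait only when machine 1 has reached Done,
    // a crude stand-in for a flag/semaphore synchronization.
    S2 n2 = (s2 == S2::Wait && s1 == S1::Done) ? S2::Run : s2;
    return {n1, n2};
}
```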
Publication: Network safety. Problems and perspectives (EWDTW, 2004). Nemchenko, Volodymyr
The paper shows the necessity of information protection in networks, analyzes certain types of network attacks, and outlines principles and perspectives of protecting information in networks. It surveys the state of the art in network safety, a problem made acute by the escalating number of network attacks recorded daily on the Internet. A summary classification of attacks with an analysis of the basic attack types is given, together with characteristics of the main types. In reality, today almost every server is attacked several times a day. Data from CERT (Computer Emergency Response Team) show the distribution of incidents registered on the Internet in connection with network attacks by year, from 1988 (6 incidents) to 2003 (137,529 incidents) [http://www.cert.org]; in total, 319,992 cases of network attacks were recorded during this period. Figure 1 presents this distribution.

Publication: Set operation speed-up of fault simulation (EWDTW, 2004). Zaychenko, S. A.; Parfentiy, A. N.; Kamenuka, E. A.; Ktiaman, H.
The paper presents data structures and algorithms for performing set-theoretic operations on lists of defects within the deductive fault simulation method for digital systems. Four types of data structures and calculation procedures are suggested, providing maximum performance for the basic operations required for an effective software implementation of the method. Hardware designers and manufacturers demand significant performance acceleration of fault simulation and automatic test pattern generation (ATPG) tools [1] for large-scale digital systems targeted at application-specific integrated circuits (ASICs). Over 50% of existing ATPG systems [1-4] use the deductive method of fault simulation to obtain the table of faults covered by the applied test. Analysis of the distribution of computation time during test-vector processing within the deductive method (Fig. 1) shows that about 70% of the time is spent performing set-theoretic operations on lists of faults: union, intersection, and complement (difference). The performance of a software implementation of the deductive method therefore depends strongly on the implementation efficiency of the set operations. A software implementation of the set operations may use classic data structures and algorithms, whose efficiency differs with the number of elements being processed. In deductive fault simulation, computations are performed simultaneously on sets with widely varying numbers of elements, so no single well-known data structure from general programming provides acceptable performance of the set operations for the method. The research goal is to analyze and select optimal data structures and processing algorithms for set-theoretic operations that provide the highest performance and lowest memory usage in a software implementation of the deductive fault simulation method. The research tasks are: analysis of the classic data structures used in discrete mathematics [5, 6] and general programming [7-9] to implement set operations; development of a computation strategy that provides high speed and low memory usage for fault simulation of large-scale digital systems; and an efficiency assessment of the developed strategy.
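The abstract names the operations but not the four data structures; a common trade-off in this setting (an assumption here, not the paper's taxonomy) is a fixed-width bitset for dense fault lists versus a sorted vector for sparse ones, since both give linear-time union. The sketch contrasts the two.

```cpp
#include <algorithm>
#include <bitset>
#include <iterator>
#include <vector>

constexpr std::size_t kFaults = 4096;  // assumed total fault count

// Dense representation: one bit per fault; union is a single OR sweep.
std::bitset<kFaults> unite(const std::bitset<kFaults>& a,
                           const std::bitset<kFaults>& b) {
    return a | b;
}

// Sparse representation: sorted vector of fault IDs; union is a merge.
std::vector<int> unite(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> r;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(r));
    return r;
}
```

A simulator can then choose the representation per fault list, e.g. by a density threshold; such a hybrid strategy is one plausible reading of why several structures are needed side by side.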
Publication: Hierarchical hybrid approach to complex digital systems testing (EWDTW, 2005). Hahanova, I. V.; Obrizan, V.; Ghribi, W.; Yeliseev, V.; Ktiaman, H.; Guz, O. A.
This paper offers an approach to complex digital system testing based on hierarchy scaling during the diagnosis experiment. Several models of testing are proposed, and the main principles of testing system organization are given. The approach allows a significant reduction of overall system testing and verification time.

Publication: Assertions-based verification for SystemC (EWDTW, 2005). Forczek, M.; Zaychenko, S.
Assertion-Based Verification (ABV) has gained worldwide acceptance as a verification methodology for electronic system designs, and a number of papers [1-3] explain the methodology in depth. The original concept of an assertion comes from software development, where it (in particular the assert() macro defined in the C language [4]) has proved to be a very powerful tool for automatic bug and regression detection [5]. Assertions for hardware designs employ Linear Temporal Logic (LTL) to define expected and/or forbidden behavior. The foundation of ABV is Hardware Verification Languages (HVLs), which combine the semantics of LTL with constructs for building reusable verification IP units. Verification IP units need to be bound to some design for effective use, so HVLs provide constructs to specify connections with models in Hardware Description Languages (HDLs). Most ABV implementations are part of HDL-based integrated design environments (IDEs). The SystemC open initiative [6] provides an alternative to HDLs, as it enables C++ [7], the industrial-strength notation for complex systems, with the hardware concepts of RTL and system level in the form of a C++ template library. In the original approach, SystemC models are processed by a standard C++ toolset and executed as standalone applications, and SystemC has become a very popular environment for modeling at the system level of abstraction. HDL-based IDEs offer co-simulation capabilities with a SystemC engine, but it still remains a unit external to the HDL simulator. Applying ABV to SystemC designs is a natural step in the integration of HDL and SystemC environments. Since an HDL design can be co-simulated with a SystemC model, there is an easy way to associate a verification unit with a SystemC one: the SystemC unit is connected to an HDL wrapper unit that provides an entry point for binding the verification unit. This method does not require any additional tools, assuming the availability of an HDL simulator.
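For a flavor of what such assertions express (a plain C++ sketch of the semantics, not the paper's HVL constructs or SystemC binding): a monitor for the bounded-response obligation "every request is followed by a grant within N cycles", evaluated cycle by cycle the way a simulator-attached checker would.

```cpp
// Cycle-by-cycle monitor for an LTL-flavoured bounded-response property:
// a req with no grant starts a countdown of bound_ cycles; if the countdown
// has expired and the current cycle still brings no grant, report violation.
class ResponseMonitor {
    int countdown_ = -1;   // -1 means no pending obligation
    const int bound_;
public:
    explicit ResponseMonitor(int bound) : bound_(bound) {}
    // Call once per clock cycle; returns false when the property is violated.
    bool step(bool req, bool grant) {
        if (grant) countdown_ = -1;               // obligation discharged
        else if (countdown_ == 0) return false;   // deadline missed
        else if (countdown_ > 0) --countdown_;
        if (req && !grant && countdown_ < 0) countdown_ = bound_;  // arm
        return true;
    }
};
```

bound_, req, and grant are illustrative names; in an actual ABV flow the property would be written in an HVL and bound to signals of the design under verification.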
Publication: Verification and testing RKHS series summation method for modelling radio electronic devices (EWDTW, 2005). Chumachenko, S. V.; Chugurov, I. N.; Chugurova, V. V.
A Series Summation method in a Reproducing Kernel Hilbert Space (RKHS) is developed that allows analytically obtaining alternative finite-form representations of series. To increase the efficiency of solving computational tasks, mathematical co-processors are used that implement the most efficient ways of computing equations, integrals, derivatives, etc. Clearly, when new methods of increasing computation accuracy and decreasing computation time are discovered, it becomes necessary to re-implement the mathematical co-processors or to use a new generation of IP cores in PLD, gate array, and ASIC designs. A method reducing the computation of certain types of series to an exact function, widely used when calculating the parameters of high-radio-frequency devices, was presented in [1-4]; it decreases the computation time of such tasks by tens and hundreds of times, with inaccuracy equal to zero. The purpose of this investigation is the verification and testing of the Series Summation Method in RKHS for modelling radio electronic devices.

Publication: Assertions-based mechanism for the functional verification of the digital designs (EWDTW, 2005). Hahanov, V. I.; Yegorov, O.; Zaychenko, S.; Parfentiy, A.; Kaminska, M.; Kiyaschenko, A. V.
According to [1], the verification cost of digital devices designed on the basis of ASIC, IP-core, and SoC technologies takes up to 70% of the overall design cost; similarly, up to 80% of the project source code implements the testbench. Reducing these two parameters minimizes time-to-market, which is one of the main problems for the world-leading companies in Electronic Design Automation (EDA). The goal of verification is to eliminate all design errors as early as possible, so as to meet the requirements of the specification: an error that passes through the subsequent design stages (from a block to a chip, and later to a system) becomes more expensive to eliminate at each stage. Validation, a higher-level verification model, confirms the correctness of the project against problems in the implementation of the major specified functionality. The goal of this paper is to noticeably decrease verification time by extending the design with software-based redundancy, the assertions mechanism [2-5], which allows the major specified constraints to be analyzed simply during device simulation and errors to be diagnosed when they are detected. To achieve this goal it is necessary: 1) to formalize the assertions-based product verification process model; 2) to develop software components for synthesis and analysis of assertions for individual functions, blocks, and the entire system; 3) to obtain experimental confirmation of the benefits of using assertions to reduce time-to-market, in other words, to noticeably reduce verification and overall design time.

Publication: High level FSM design transformation using state splitting (EWDTW, 2005). Kulak, E.; Kovalyov, E.; Syrevitch, Ye.; Grankova, E.
One of the problems in testbench generation for extended finite state machines (EFSMs) is the existence of internal variables. The use of these variables in transition conditions increases the real number of states by orders of magnitude: even a single variable with a bit length of 20 leads to the state explosion problem. For some control units, however, it is possible to redesign the project by including the state variables in the state register. The transformation algorithm contains phases of state splitting, transition splitting, unreachable (dead) state reduction, and equivalent state minimization. The results of such a transformation can be used for design analysis, optimization, validation, verification, synthesis, and implementation. This paper was motivated by the authors' work on ASFTest, a testbench generator for Aldec finite state machines. The graphical user interfaces of state-of-the-art software create a design-entry environment built on the finite state machine abstraction; this form of design description is used in many software and hardware design tools, such as StateCAD, FPGA Advantage, Stateworks, and Stateflow. The algorithm is described graphically using the extended FSM notation. VHDL is chosen as the target language; synthesis is done by the Xilinx synthesis tool included in the Xilinx WebPack environment, and the target device is a CPLD CoolRunner-II.
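To make state splitting concrete (a schematic sketch under my own tiny encoding, not the paper's algorithm): a one-bit internal variable v that guards a transition is folded into the state register, so each affected state splits into explicit (state, v) pairs and the guard disappears from the transition condition.

```cpp
#include <cstdint>

// Schematic state splitting: the original EFSM has states {A, B}, an
// internal bit v toggled by input 'tick', and a guarded transition
// "in A, if v == 1 then goto B". Folding v into the state register
// yields four plain FSM states and removes the guard.
enum class Split : std::uint8_t { A_v0, A_v1, B_v0, B_v1 };

Split step(Split s, bool tick) {
    switch (s) {
        case Split::A_v0: return tick ? Split::A_v1 : Split::A_v0; // v toggles
        case Split::A_v1: return Split::B_v1;  // former guard v == 1, now implicit
        case Split::B_v0:
        case Split::B_v1: return s;            // B is absorbing in this toy example
    }
    return s;
}
```

Folding an n-bit variable multiplies the state count by 2^n, which is why the transformation is followed by unreachable-state reduction and equivalent-state minimization.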
Publication: Brainlike computing (EWDTW, 2005). Shabanov-Kushnarenko, Yu.; Klimushev, V.; Lukashenko, O.; Nabatova, S.; Obrizan, V.; Protsay, N.
This paper offers a mathematical foundation for a brainlike computer and discusses a new approach to building artificial intelligence, in which human intelligence is considered as a material embodiment of the mechanism of logic; the hardware efficiency of a logic net implementation is also shown. The urgency of the research is determined by the need to design a parallel computer with a significant performance increase over software implementations on von Neumann architectures. The goal of the research is the design of a parallel computer operating by the principles of the human brain and built on a modern element base. To reach the goal it is necessary: 1) to design the new approach to artificial intelligence, in which human intelligence is considered as a material implementation of the mechanism of logic; 2) to algebraize logic; 3) to formalize the logic net model; 4) to develop logic synthesis procedures for the logic net; 5) to design the logic net design flow; 6) to analyze the efficiency of hardware implementation. Rapidly progressing computerization and informatization demand a constant increase in computer productivity, which is more and more difficult to achieve: the reserves for increasing the speed of computing elements are getting exhausted. One path is to scale up the number of simultaneously operating elements in the processor; thanks to microminiaturization, the falling prices of electronic elements, and advances in the automation of computer design and manufacturing, it is now practical to build computers with up to 10^8 elements. However, with serial computers operating on J. von Neumann's principle of program control it is senseless to do so, since only a small number of elements operate simultaneously at any time. Attempts at conversion to parallel machines do not provide the expected growth of productivity: for example, the productivity of multiprocessor computers grows not proportionally to the number of processors, as it seemingly should, but much more slowly. There are also essential difficulties in creating high-efficiency neurocomputers constructed as formal neuron networks. Meanwhile, nature has created a "computer", namely the human brain, for which the problem of fully parallel information processing is completely solved. The human brain is slow compared with a modern computer: its "clock frequency" can be estimated from the throughput of nerve fibers, each of which can pass at most 10^3 pulses per second, whereas the conductors of modern computers can transfer about 10^9 pulses per second. Hence the computer surpasses the human brain in the speed of its computing elements by a factor of 10^9 / 10^3 = 10^6. Nevertheless, due to its parallel principle of action, the brain works faster and is capable of solving immeasurably more difficult tasks than the most powerful modern computers with program control. This is because the human brain incorporates about 10^15 computing elements (their role is played by synapses, the interfaces between the ends of nerve fibers), and according to neurophysiology all of them operate simultaneously, while in serial computers only a small number of elements operate in parallel at any moment.

Publication: Design of Wavelet Filter Bank for JPEG 2000 Standard (EWDTW, 2006). Hahanova, I. V.; Fomina, E.; Sorudeykin, K.; Hahanov, V. I.; Bykova, V.
Models, a method, and a hardware implementation of a lifting-based wavelet filter scheme for the JPEG 2000 standard are proposed. The JPEG 2000 image compression standard is used for data transmission, printing and scanning of images, and digital photography. The low-pass and high-pass filters implementing the JPEG 2000 transformation are described, and the obtained results are compared with the corresponding parameters of other discrete wavelet transform (DWT) devices proposed in the references. The purpose of this work is an essential speed increase of the special-purpose pipelined lifting-based DWT hardware implementation. JPEG 2000 is a new image compression algorithm based on a discrete wavelet transform of the input data and is the next development of the JPEG group; reducing the transform time through special-purpose SoC architectures essentially increases the attractiveness of such a device. To achieve this purpose, the following tasks were solved: 1) digital models and their transformation methods were considered; 2) a lifting-based wavelet transformation hardware architecture was designed; 3) a control algorithm for the DWT was created; 4) a DWT device was implemented on a Xilinx FPGA; 5) the digital system was tested and verified, and the speeds and SNR of different device versions were compared. A DWT device architecture was developed that does not use external memory, which increases the device speed and also reduces its cost; an IP core for a JPEG 2000 encoder/decoder SoC is proposed. Unlike the referenced works, the emphasis here is on fast control block design, not only on the arithmetic blocks, which increases the speed of the whole device. A speed analysis was carried out for devices implemented on different Xilinx FPGA series with different memory types and transform image sizes, and the device was compared with existing prototypes by speed and area. The scientific novelty is a pipelined DWT device intended for use as an IP core implemented in a programmable chip; the proposed device has a simpler ALU part and does not use external memory, and so is faster and cheaper than existing analogs. The practical significance is a simple, technological, and effective DWT device with high speed and low power consumption, which is its advantage over software implementations of the IEEE JPEG 2000 standard. Further work: 1) DWT and IDWT device design for the 5/3 and 9/7 JPEG 2000 filter banks; 2) DWT and IDWT device implementation using the Xilinx Virtex-4 DSP resources.
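The 5/3 reversible filter bank mentioned in the further-work list is defined by the JPEG 2000 standard itself, so a plain rendition of one 1-D forward lifting pass can serve here as a reference sketch; the pipelined hardware architecture that is the paper's actual subject is not reproduced. Border handling is simplified symmetric mirroring, and an even input length is assumed.

```cpp
#include <vector>

// One 1-D level of the JPEG 2000 reversible 5/3 lifting DWT (forward pass).
// Assumes an even number of samples; borders use symmetric mirroring.
void dwt53(const std::vector<int>& x,
           std::vector<int>& low, std::vector<int>& high) {
    const int m = static_cast<int>(x.size());
    const int n = m / 2;
    low.assign(n, 0);
    high.assign(n, 0);
    auto at = [&](int i) {                       // symmetric extension
        if (i < 0) i = -i;
        if (i >= m) i = 2 * m - 2 - i;
        return x[i];
    };
    auto fdiv = [](int v, int d) {               // floor division, negatives too
        return (v >= 0) ? v / d : -((-v + d - 1) / d);
    };
    for (int i = 0; i < n; ++i)                  // predict: odd samples -> detail
        high[i] = x[2 * i + 1] - fdiv(at(2 * i) + at(2 * i + 2), 2);
    for (int i = 0; i < n; ++i) {                // update: even samples -> approximation
        const int dPrev = (i > 0) ? high[i - 1] : high[0];  // mirrored border detail
        low[i] = x[2 * i] + fdiv(dPrev + high[i] + 2, 4);
    }
}
```

Because all steps are integer lifting steps, the transform is exactly invertible, which is what makes the 5/3 bank the reversible (lossless) path of JPEG 2000.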