Browse by author "Hahanova, I. V."
Now showing 1 - 9 of 9
Publication: Co-design technology of SoC based on Active-HDL 6.2 (EWDTW, 2004)
Hyduke, S.; Yegorov, A. A.; Guz, O. A.; Hahanova, I. V.
A technology for designing and verifying digital systems-on-a-chip (SoC) is presented, based on experience in developing the hardware and software components of an SoC within a single environment. It reflects the current variety of available silicon, hardware and software description languages, and design tools; recommendations and examples are also given. On today's EDA market there are three major target silicon technologies that define the computing world: programmable devices, gate arrays, and ASICs. These technologies and the relations between them are presented in Fig. 1, together with the manufacturing technology of silicon chips, hardware and software description languages, design tools, and SoC methodology.
[Fig. 1. Cause-and-effect relations on the EDA market: SoCs based on ASIC, gate array (GA), and PLD silicon and their combinations (ASIC+PLD, CPU+PLD, ASIC+CPU, ASIC+CPU+PLD); 90 nm technology; HDL-based design tools.]
The practical meaning of the figure is that the influence of SoC on ASIC and FPGA (PLD) design has started an integration between them. Powerful embedded processors such as ARM and PowerPC have begun to appear on FPGAs; the latest Xilinx Virtex-II Pro FPGA, for example, provides four embedded IBM PowerPC processors plus 10 million programmable gates available to the user. The design flows of FPGAs and ASICs have also started to merge since Altera announced its Structured ASIC flow, in which an FPGA-verified design is transferred to an ASIC without any involvement of the developer. This will influence the world chip market of about $40 billion per year: 1) powerful processors used in servers and workstations; 2) personal computers, where Intel processors hold the leading place with $20 billion; 3) microcontrollers and signal processors, which generate $14 billion of revenue for vendors every year. The third segment is the fastest growing of the three. Hardware development has reached a stage where the number of transistors grows by 60% per year while their usage in projects grows by only 20% per year, which is why the number of SoCs is growing rapidly today. The space available on a chip absorbs all the buses and peripherals of the developed system that were previously on the board, which not only increases the performance of the whole digital system and allows custom functionality, but also significantly reduces energy consumption and the physical size of the final product. At the same time, one of the main requirements in designing complex systems today is a modular approach, in which the designer can reuse modules from previous projects or use IP (Intellectual Property) cores. For SoCs there is a range of ready-to-use processors with peripheral buses and libraries of standard peripherals, with different functionality and sizes, from simple interfaces to complex 64-bit processors requiring a few million transistors.
Publication: Design of Wavelet Filter Bank for JPEG 2000 Standard (EWDTW, 2006)
Hahanova, I. V.; Fomina, E.; Sorudeykin, K.; Hahanov, V. I.; Bykova, V.
Models, a method, and a hardware implementation of a lifting-based wavelet filter scheme for the JPEG 2000 standard are proposed. The JPEG 2000 image compression standard is used for data transmission, printing and scanning of images, and digital photography. The low-pass and high-pass filters implementing the JPEG 2000 transformation are described. The obtained results were compared with the same parameters of other discrete wavelet transform (DWT) devices proposed in the references. The purpose of this work is a substantial increase in the speed of the ad hoc pipelined lifting-based DWT hardware implementation. JPEG 2000 is a new image compression algorithm based on a discrete wavelet transform of the input data; it is the next development of the JPEG group and can be used for data transmission over the Internet, image printing and scanning, and digital photography. Reducing the transform time through ad hoc SoC architectures substantially increases the attractiveness of the device. To achieve this purpose the following tasks were solved: 1. Digital models and their transformation methods were considered. 2. A lifting-based wavelet transform hardware architecture was designed. 3. A control algorithm for the DWT was created. 4. The DWT device was implemented on a Xilinx FPGA. 5. The digital system was tested and verified, and the speed and SNR of different device versions were compared. A DWT device architecture that does not use external memory was developed, which increases the device speed and also reduces its cost. An IP core for a JPEG 2000 encoder/decoder SoC is proposed. This work emphasizes fast control block design, not only the arithmetic blocks treated in the cited references, which increases the speed of the whole device. A speed analysis was carried out for devices implemented on different Xilinx FPGA series with different memory types and transform image sizes, and the device was compared with existing prototypes in speed and area. The scientific novelty is a pipelined DWT device intended for use as an IP core implemented in a programmable chip. The proposed device has a simpler ALU part and uses no external memory, and is therefore faster and cheaper than existing analogs. The practical significance is a simple, manufacturable, and efficient DWT device with high speed and low power consumption, which is its advantage over software implementations of the IEEE JPEG 2000 standard. Further work: 1. DWT and IDWT device design for the 5/3 and 9/7 JPEG 2000 filter banks. 2. DWT and IDWT device implementation using the Xilinx Virtex-4 DSP processor.
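For reference, a minimal software sketch of the reversible 5/3 lifting step that underlies such JPEG 2000 DWT hardware. The one-dimensional whole-signal formulation, the even-length assumption, and the function names are illustrative assumptions; the paper's pipelined device processes streamed samples instead.

```python
def dwt53_1d(x):
    """One level of the JPEG 2000 reversible 5/3 lifting DWT (1-D).

    x: list of ints with even length. Returns (approximation, detail).
    Boundaries use symmetric (mirror) extension, as in the standard.
    """
    n = len(x)

    def xe(i):  # symmetric extension of the input signal
        i = -i if i < 0 else i
        return x[2 * (n - 1) - i] if i >= n else x[i]

    # Predict step: detail (high-pass) coefficients at odd positions.
    d = [xe(2*i + 1) - ((xe(2*i) + xe(2*i + 2)) >> 1) for i in range(n // 2)]
    # Update step: approximation (low-pass) coefficients at even positions
    # (the mirror extension makes d[-1] equal to d[0]).
    s = [x[2*i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n // 2)]
    return s, d

def idwt53_1d(s, d):
    """Inverse of dwt53_1d: undo the update step, then the predict step."""
    m = len(s)
    x = [0] * (2 * m)
    for i in range(m):
        x[2*i] = s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
    for i in range(m):
        right = x[2*i + 2] if i < m - 1 else x[2*m - 2]
        x[2*i + 1] = d[i] + ((x[2*i] + right) >> 1)
    return x

signal = [5, 7, 3, 0, 2, 9, 4, 4]
s, d = dwt53_1d(signal)
assert idwt53_1d(s, d) == signal  # the integer transform is lossless
```

Because every lifting step uses only integer adds and shifts, the forward and inverse transforms round-trip exactly, which is what makes the scheme attractive for compact FPGA arithmetic.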
Publication: Hierarchical hybrid approach to complex digital systems testing (EWDTW, 2005)
Hahanova, I. V.; Obrizan, V.; Ghribi, W.; Yeliseev, V.; Ktiaman, H.; Guz, O. A.
This paper offers an approach to complex digital system testing based on hierarchy scaling during the diagnosis experiment. Several models of testing are proposed, and the main principles of organizing the testing system are given. The approach allows a significant reduction of the overall system testing and verification time.

Publication: Models for SoC Infrastructure of Radio Frequency Identification with Code-Division Multiple (EWDTS, 2012)
Filippenko, I. V.; Hahanova, I. V.; Filippenko, I. O.; Maksimov, M.; Chugurov, I. N.
The application of direct spread spectrum technology to radio frequency identification systems for solving the problems of speed, reliability, and electromagnetic compatibility is considered. Models of systems-on-chips for the tag and reader of RFID systems with code-division multiplexing, based on direct spread spectrum technology, are proposed. Nowadays the most promising technology for automatic identification is radio frequency identification (RFID), in which data is transferred without any mechanical contact between the devices. Contactless identification technologies meet all the requirements of computer control systems in which objects are recognized and registered in real time. The modern infrastructure of RFID systems (Fig. 1) involves readers, tags, and various protocols that allow integrating the system into global information networks and into a variety of enterprise-level applications. In addition, the infrastructure can include other devices, such as bar code readers and input/output devices, for instance label printers, motion detectors, and photo sensors. The transponder and reader communicate with each other over a radio frequency channel. The wireless data channel is the most vulnerable link in the system, because the data transmitted over the radio channel may be intercepted; the electromagnetic compatibility of various wireless devices is also an important problem. This paper is devoted to solving these problems by creating models of systems-on-chips for the tag and reader using direct spread spectrum (CDMA) technology, as sketched below.
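As a minimal illustration of the direct spread spectrum principle on which such CDMA tag/reader links rely, the sketch below spreads each data bit over a chip sequence and recovers it by correlation. The 8-chip code, the noise model, and all names are illustrative assumptions, not taken from the paper.

```python
# Direct-sequence spread spectrum: each data bit is multiplied by a
# pseudo-random chip sequence; the receiver correlates the incoming
# chips with the same sequence, so the bit survives moderate noise.

CODE = [1, -1, 1, 1, -1, 1, -1, -1]  # illustrative 8-chip spreading code

def spread(bits):
    """Map bits {0,1} to chips {-1,+1} spread by CODE."""
    chips = []
    for b in bits:
        sign = 1 if b else -1
        chips.extend(sign * c for c in CODE)
    return chips

def despread(chips):
    """Correlate each CODE-length block with CODE; threshold at zero."""
    n = len(CODE)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], CODE))
        bits.append(1 if corr > 0 else 0)
    return bits

tx = spread([1, 0, 1, 1])
noisy = [x + (0.4 if i % 3 == 0 else -0.3) for i, x in enumerate(tx)]
assert despread(noisy) == [1, 0, 1, 1]  # recovered despite channel noise
```

Tags using sufficiently uncorrelated codes can share the same band, which is how code division addresses the electromagnetic compatibility problem the abstract raises.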
Publication: Quantum Models for Description of Digital Systems (EWDTS, 2013)
Hahanov, V. I.; Hahanova, I. V.; Litvinova, E. I.; Priymak, A.; Fomina, E.; Maksimov, M.; Tiecoura Yves; Malek, Jehad Mohammad Jararweh
Quantum models for the description of digital systems are presented, together with results of studies concerning models and methods of quantum diagnosis of digital systems, qubit fault simulation, and analysis of fault-free behavior. Quantum calculators are effectively used for fault-tolerant design and for solving optimization problems by brute force through the use of set theory. A set of elements in a traditional computer is ordered, because each bit, byte, or other component has its own address; set-theoretic operations therefore reduce to an exhaustive search over the addresses of primitive elements. The address ordering of data structures is useful for applications whose model components can be strictly ranked, which makes it possible to analyze them in a single pass (a single iteration). If there is no order in the structure, for example for the set of all subsets, the classical model of memory and computational processes worsens the analysis time for associations of primitives of equal rank, and the processing of associative groups is ineffective. What can be offered for unordered data instead of strict order? A processor whose unit cell is the image or pattern of a universe of n primitives, generating all Q = 2^n possible states of the cell as a power set, the set of all subsets. A direct solution for creating such a cell is based on unitary positional coding of the states of primitives, which forms the set of all subsets and, in the limit, the universe of primitives as a superposition of them. The history of the need to develop quantum computing against the background of the technological revolution in nano-electronics fits into a few clear theses: 1) The quantum computer was created by experts in the field of quantum mechanics and electronics, who introduced the idea of a non-numeric analog-based computer. 2) The introduced notion of a qubit corresponds to the power set of primitives, which is the ideal non-numeric form of describing an object component for the analysis, synthesis, and optimization of discrete objects. 3) The forms of qubit representation are the following: 1. The universe of primitive symbols, which generates the set of all subsets (power set). 2. Binary vectors, where the power set is a combination of the unit values of primitives. 3. The Hasse diagram, which forms the power set of all possible solutions on a graph. 4. The full transition graph, which determines the set of all subsets of transitions in the form of arcs. 5. A geometric representation of a qubit in the plane as points and segments corresponding to the Boolean (power set). 4) In practice, more than 90% of all IT-industry problems associated with information retrieval in cyberspace, pattern recognition, and decision making belong to the field of discrete mathematics, where it is difficult to find a place for numerical arithmetic. 5) It is necessary to create associative-logic brain-like parallel (quantum) processors, which effectively use Boolean (qubit) primitives or elements (sets) to solve problems of discrete mathematics. 6) Set-theoretic operations have to be replaced by the isomorphic logical instructions (and, or, not, xor) for the subsequent creation of a new system of parallel qubit programming to solve logic and optimization problems based on qubit data structures; see the sketch after this entry. 7) Another solution for organizing computing is associated with a topological representation of the qubit, where the elements are geometric shapes. 8) Non-numeric problems suited to a quantum processor are the following: minimization of the forms of Boolean functions when describing complex systems; searching for paths in a graph; testing and diagnosis of digital systems; combinatorial studies of processes and phenomena; intelligent data search, pattern recognition, and decision making; discretization of fuzzy models and methods when creating intelligence; digital data processing and the development of efficient codecs for DSP devices.
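Thesis 6 above is easy to make concrete: under unitary positional (one-hot) coding, a subset of a universe of n primitives is a single n-bit vector, and set-theoretic operations collapse into the logical instructions and, or, not, xor. The example universe, the names, and the assertions below are my illustrative assumptions.

```python
# Universe of n = 4 primitives {a, b, c, d}: the power set has 2**n = 16
# elements, each encoded as a 4-bit vector with one bit per primitive.
PRIM = {'a': 0b0001, 'b': 0b0010, 'c': 0b0100, 'd': 0b1000}
UNIVERSE = 0b1111

def encode(symbols):
    """Unitary positional coding: a subset is the OR of its primitives."""
    v = 0
    for name in symbols:
        v |= PRIM[name]
    return v

X = encode('abc')                      # {a, b, c}
Y = encode('bd')                       # {b, d}

union        = X | Y                   # set union            -> {a,b,c,d}
intersection = X & Y                   # set intersection     -> {b}
complement   = ~X & UNIVERSE           # set complement       -> {d}
difference   = X ^ Y                   # symmetric difference -> {a,c,d}

assert union == UNIVERSE and intersection == PRIM['b']
assert complement == PRIM['d'] and difference == encode('acd')
```

Each operation touches the whole subset in one machine instruction, which is the parallelism the abstract attributes to qubit data structures.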
Publication: Quantum Technology for Analysis and Testing Computing Systems (EWDTS, 2013)
Gharibi, W.; Hahanov, V. I.; Anders, C.; Hahanova, I. V.; Filippenko, I. V.
A theory of quantum models, methods, and algorithms for improving the performance of existing software and hardware tools for the analysis and synthesis of digital computing devices by increasing the dimension of data structures and memory is proposed. The basic concepts, terminology, and definitions necessary for understanding the theory and practice of quantum computation are introduced. In recent years quantum computing has become interesting for analyzing cyberspace and developing cloud Internet technologies, which is explained by its being an alternative to the existing models of computational processes. The market feasibility of quantum methods and qubit models is based on their high parallelism in solving almost all discrete optimization problems, factorization, minimization of Boolean functions, effective compression of data, their compact representation and teleportation, and fault-tolerant design, at the cost of a significant increase in hardware. This cost is now acceptable, because nano-electronic technologies already offer up to 1 billion gates on a 2x2 cm die with a substrate thickness of 5 microns. Moreover, modern technologies allow creating a package (sandwich) containing up to 7 dies, which is comparable with the quantity of neurons of the human brain. Practically wireless connection of such chips is based on through-silicon vias (TSV), the technological capability of drilling about 10 thousand through vias in 1 square centimeter of a wafer or die. In addition, the emergence of FinFET transistors and 3D technology based on them for the implementation of digital systems provides researchers with almost unlimited hardware capabilities for creating new parallel computing devices. It is therefore necessary to use hardware-focused models and methods for creating high-speed tools that solve real-world problems in parallel. The discreteness and multiple-valuedness of the alphabets for describing information processes and the parallelism inherent in quantum computing are particularly important when developing effective and intelligent engines for cyberspace, cloud structures and Internet services, tools for the synthesis of fault-tolerant digital devices, testing and simulation of digital systems-on-chips, and technologies for information and computer security. We do not address the physical foundations of quantum mechanics, originally described in the works of scientists focused on the non-deterministic quantum interactions of atomic particles; instead we use the concept of an information quantum as a joint definition of the power set (the set of all subsets) of states of a discrete cyberspace area, which provides the high level of parallelism of the proposed quantum models and methods.

Publication: Qubit Modeling Digital Systems (EWDTS, 2014)
Hahanova, I. V.; Emelyanov, I.; Tamer Bani Amer
Data structures that are effective from the viewpoint of software or hardware implementation of fault-free interpretative modeling of discrete systems, described in the form of qubit vectors of primitive output states, are considered.

Publication: Testing and Verification of HDL-models for SoC components (EWDTS, 2009)
Hahanova, I. V.; Hahanov, V.; Ngene, C. U.; Yves, T.
A testing and verification technology for system HDL models is proposed, aimed at a significant improvement of the quality of design components for digital systems-on-chips and a reduction of the development time (time-to-market) by using a simulation environment, testability analysis of the logical structure of the HDL program, and optimal placement of the assertion engine. The novel technology allows searching for errors in HDL code with a given thoroughness in acceptable time by introducing assertion redundancy at the critical points of the software model, which are defined by the synthesized logic testability functions. The controllability and observability criteria used in hardware design and test are applied to estimate the quality of software code in order to improve it and to diagnose semantic errors effectively; a toy sketch of these measures follows this entry. The objective is to improve the testing and verification technology for digital systems so as to diagnose and correct errors in HDL models by combining the assertion engine with testable design technologies. The research tasks: 1. Design of a verification and testing environment for a system HDL model on the basis of assertions. 2. Development of testability evaluation metrics on the basis of a new logic testability function. 3. Application of the technological assertion model to verify an IP-core filter based on the discrete cosine transform. 4. Practical results and directions for further research. The research sources: 1. Technologies and tools for test and testbench creation are represented in [1-3]. 2. Models and methods for the verification of system models on the basis of assertions are described in [4-7]. 3. Testable software design uses the IEEE standards [8-10], as well as innovative solutions for the verification and testability analysis of system HDL models [11-18].
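To make the controllability/observability idea tangible, here is a toy sketch in the spirit of classic hardware testability measures: controllability decays from primary inputs forward, observability from primary outputs backward, and nodes where the product is low are candidates for assertion placement. The graph, the 0.5 decay factor, the threshold, and all names are my assumptions, not the authors' testability function.

```python
# Toy testability estimate over a small dataflow graph of an HDL model.
# node -> list of predecessor nodes (an assumed example design)
GRAPH = {'a': [], 'b': [], 't1': ['a', 'b'], 't2': ['t1', 'a'], 'y': ['t2']}
OUTPUTS = {'y'}

def controllability(node):
    """1.0 at primary inputs, decaying through each level of logic."""
    preds = GRAPH[node]
    if not preds:
        return 1.0
    return 0.5 * sum(map(controllability, preds)) / len(preds)

def observability(node):
    """1.0 at primary outputs, decaying backward toward the inputs."""
    if node in OUTPUTS:
        return 1.0
    succs = [n for n, ps in GRAPH.items() if node in ps]
    return 0.5 * max(map(observability, succs))

for node in GRAPH:
    t = controllability(node) * observability(node)
    flag = "  <- place an assertion here" if t < 0.15 else ""
    print(f"{node}: testability = {t:.3f}{flag}")
```

In this example the internal node t1 and the input b score lowest, so an assertion monitoring them would expose errors that neither simulation at the outputs nor stimulus at the inputs reaches easily.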
Publication: Transaction Level Model of Embedded Processor for Vector-Logical Analysis (EWDTS, 2012)
Adamov, A.; Hahanova, I. V.; Obrizan, V.; Shcherbin, D.
A transaction-level model of an embedded processor for improving the performance of logical relation analysis is proposed; it is based on the hardware implementation of vector operations. Examples are given of using the model for the semantic analysis of Russian adjectives. The embedded processor was designed to be part of an SoC implemented on an FPGA. As the complexity and speed of modern digital devices increase, their energy consumption and cost grow as well. A trade-off in this situation is the division of tasks across multiple processor cores, creating parallel systems from a coherent set of specialized calculators. Such structures can improve the performance of solving computational problems while reducing the power consumption and the hardware implementation cost of digital systems. Of special interest to the electronic technology market is the scientific and technical direction of formalizing human mental activity to create artificial intelligence components. Intelligent tools such as expert systems, image recognition, and decision making need effective, high-speed engines (multi-processor or specialized embedded processors). A typical example of this domain that requires a specialized processor is the analysis and synthesis of natural language constructs, and one of the main points of designing such a word processor is the hardware implementation of the device that handles the synthesis and analysis of the language constructs. Purpose: development of a transaction-level model of a special embedded processor for the hardware realization of vector operations. Objectives: 1. Analysis of publications on specialized logic processor design [1-4]. 2. Analysis of the syntactic and semantic models of word processing implemented for natural languages [5-6]. 3. Creation of the architecture of a specialized embedded processor that analyzes the logical net of language constructs [6]. 4. Hardware implementation of a transaction-level model of the device that performs the grammatical analysis of Russian adjectives. The design prototype is a specialized device that performed the grammatical analysis of adjectives and was implemented in an FPGA [6]. The proposed model is more flexible and can handle any logical net of syntactic and semantic relations. The use of transaction-level models and design techniques made it possible to focus on the order of data processing and transmission and to omit unimportant details.
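As a minimal sketch of the kind of vector-logical operation such a processor accelerates, grammatical features can be packed into bit vectors so that adjective-noun agreement reduces to wide AND operations. The feature layout, the lexicon, and all names are my illustrative assumptions, not the paper's model.

```python
# Each word form carries a bit vector of grammatical features; checking
# adjective-noun agreement is a bitwise AND within every feature group.

FEATURES = ['masc', 'fem', 'neut', 'sing', 'plur', 'nom', 'acc']

def vec(*active):
    """Pack the named grammatical features into one bit vector."""
    v = 0
    for name in active:
        v |= 1 << FEATURES.index(name)
    return v

GENDER = vec('masc', 'fem', 'neut')
NUMBER = vec('sing', 'plur')
CASE   = vec('nom', 'acc')

def agrees(a, b):
    """Two word forms agree iff they share a value in every group."""
    return all(a & b & group for group in (GENDER, NUMBER, CASE))

adjective = vec('fem', 'sing', 'nom')   # e.g. Russian 'novaya' (new)
noun      = vec('fem', 'sing', 'nom')   # e.g. 'kniga' (book)

assert agrees(adjective, noun)
assert not agrees(adjective, vec('masc', 'sing', 'nom'))
```

A hardware vector unit evaluates such AND/compare chains over whole relation nets in parallel, which is the performance gain the transaction-level model is meant to capture.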