Browsing by author "Zaychenko, S. A."
Publication: Set operation speed-up of fault simulation (EWDTW, 2004)
Authors: Zaychenko, S. A.; Parfentiy, A. N.; Kamenuka, E. A.; Ktiaman, H.

This paper presents data structures and algorithms for performing set-theory operations on fault lists within the deductive fault simulation method for digital systems. Four types of data structures and computation procedures are proposed that provide maximum performance for the basic operations required for an efficient software implementation of the method.

Hardware designers and manufacturers demand significant performance acceleration of fault simulation and automatic test pattern generation (ATPG) tools [1] for large-scale digital systems targeted at application-specific integrated circuits (ASICs). Over 50% of existing ATPG systems [1-4] use the deductive method of fault simulation to obtain the table of faults covered by the applied test. Performance profiling of the computation cycle for test-vector processing within the deductive method (fig. 1) shows that about 70% of the time is spent performing set-theory operations on fault lists: union, intersection and complement (difference). The performance of a software implementation of the deductive method therefore depends strongly on how efficiently the set operations are implemented.

A software implementation of the set operations may use classic storage data structures and algorithms, whose efficiency varies with the number of elements being processed. In deductive fault simulation, individual computations are performed on sets whose element counts vary widely. Consequently, no single well-known data structure in general programming provides acceptable performance for the set operations of the deductive fault simulation method.
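The abstract does not specify how the union, intersection and difference operations are realized. As one common realization, here is a minimal Python sketch assuming fault lists are stored as sorted lists of integer fault identifiers; the merge-based approach and all names are illustrative assumptions, not the paper's chosen data structures:

```python
def union(a, b):
    """Sorted union of two sorted fault lists (linear-time merge)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        elif a[i] > b[j]:
            out.append(b[j]); j += 1
        else:  # same fault in both lists: emit once
            out.append(a[i]); i += 1; j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def intersection(a, b):
    """Faults present in both sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            out.append(a[i]); i += 1; j += 1
    return out

def difference(a, b):
    """Faults in sorted list a but not in sorted list b (complement)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        elif a[i] > b[j]:
            j += 1
        else:  # fault appears in b: drop it
            i += 1; j += 1
    out.extend(a[i:])
    return out
```

All three operations run in O(|a| + |b|), which is why sorted-list merges are a common baseline for fault-list processing; the paper's point is that no single such structure is optimal across all set sizes encountered in deductive simulation.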
The research goal is to analyze and select optimal data structures and processing algorithms for set-theory operations that provide the highest performance and lowest memory usage for a software implementation of the deductive fault simulation method. The research tasks include:
– analysis of the classic data structures used in discrete mathematics [5,6] and general programming [7-9] for implementing set-theory operations;
– development of a computation strategy that provides high speed and low memory usage for fault simulation of large-scale digital systems;
– efficiency assessment of the developed strategy.

Publication: Synthesis of Qubit Models for Logic Circuits (EWDTS, 2012)
Authors: Zaychenko, S. A.; Gharibi, W.; Dahiri, Farid; Hahanova, Yu. V.; Guz, O. A.; Ngene, C. U.; Adiele, Stanley

Qubit (quantum) data structures and computational processes are proposed that significantly improve performance in solving discrete optimization and fault-tolerant design problems. A superpositional method is described for synthesizing a cube of functionality for implementation in the structural components of programmable logic chips. Estimates of the synthesis time, as well as of the hardware costs of creating qubit models of logic circuits, are presented.

Quantum computing is becoming interesting for cyberspace analysis and for creating new Internet technologies and services, because it offers an alternative to existing models of computing processes. The market appeal of quantum (qubit) models rests on their high parallelism in solving almost all discrete optimization problems: factoring, minimization of Boolean functions, effective data compression, compact representation and teleportation of data, and fault-tolerant design, achieved at the cost of a significant increase in hardware. This cost is now acceptable, since it is possible to use silicon chips containing up to 1 billion gates on a substrate 5 microns thick.
Moreover, modern technologies allow creating a package (sandwich) containing up to 7 chips, which is comparable to the number of neurons in the human brain. In practice, through-silicon via (TSV) connection relies on the technological capability to drill about 10 thousand through vias in 1 square centimeter of wafer or die. Laying out the indicated volume of useful functionality on a chip is currently problematic, so it is necessary to develop hardware-focused models and methods for creating high-speed tools that solve real-world problems in parallel. Given the discreteness and multi-valuedness of the alphabets used to describe information processes, the parallelism inherent in quantum computing is particularly relevant for developing effective and intelligent engines for cyberspace and the Internet, tools for synthesizing fault-tolerant digital primitives and systems, testing and simulation of digital systems-on-chips, technologies for information and computer security, brain-like computing models, and analysis and synthesis of linguistic constructions.
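The abstract does not detail the superpositional cube-synthesis method itself. As a loose sketch of the general idea only, assuming (this encoding and all names are this editor's assumptions, not the authors' notation) that a 2-input gate's functionality is the 4-bit column of its truth table for inputs 00, 01, 10, 11, a "superposition" of functionalities can then be modeled as a bitwise OR of such vectors:

```python
from functools import reduce

# Hypothetical truth-table-column encoding for 2-input gates,
# one bit per input combination 00, 01, 10, 11 (an assumption,
# not the paper's qubit-model notation).
AND_GATE = 0b0001
XOR_GATE = 0b0110
OR_GATE = 0b0111

def superpose(*columns):
    """Bitwise OR of functionality vectors as a crude superposition:
    the result covers every input combination any operand covers."""
    return reduce(lambda x, y: x | y, columns)

# Superposing the AND and XOR functionalities yields the OR
# functionality: 0b0001 | 0b0110 == 0b0111.
```

Such bit-vector functionality models are what make the approach attractive for programmable-logic components: a whole truth-table column is processed in one machine word operation.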