Browsing by Author "Lukashenko, O."
Now showing 1 - 5 of 5

Publication: Brainlike computing (EWDTW, 2005)
Shabanov-Kushnarenko, Yu.; Klimushev, V.; Lukashenko, O.; Nabatova, S.; Obrizan, V.; Protsay, N.

This paper offers a mathematical foundation for a brainlike computer. A new approach to building artificial intelligence is discussed: human intelligence is regarded as a material embodiment of the mechanism of logic. The hardware efficiency of a logic net implementation is also shown.

The urgency of the research is determined by the need for a parallel computer that gives a significant performance increase over software implementations on von Neumann architectures. The goal of the research is to design a parallel computer that operates on the principles of the human brain and is built on a modern element base. To reach this goal, the following tasks must be solved: 1) devising a new approach to artificial intelligence in which human intelligence is regarded as a material implementation of the mechanism of logic; 2) algebraization of logic; 3) formalization of the logic net model; 4) development of logic synthesis procedures for logic nets; 5) design of a logic net design flow; 6) analysis of the efficiency of the hardware implementation.

Rapidly progressing computerization and informatization demand a constant increase in computer performance, yet achieving it is becoming more and more difficult. The reserves for increasing the speed of individual computing elements are nearly exhausted. An alternative is to increase the number of simultaneously operating elements in the processor. Thanks to microminiaturization, the falling price of electronic components, and advances in the automation of computer design and manufacturing, it is now practical to build computers with up to 10^8 elements. With present serial computers, whose operation is based on J. von Neumann's principle of program control, this is pointless, because only a small number of elements are in operation at any given time. Attempts to move to parallel machines do not provide the expected growth in performance: the performance of multiprocessor computers grows not proportionally to the number of processors, as one would expect, but much more slowly. There are also substantial difficulties in building high-performance neurocomputers constructed as networks of formal neurons. Meanwhile, there is a "computer" created by nature, the human brain, for which the problem of fully parallel information processing is completely solved. The human brain is slow in comparison with a modern computer: its "clock frequency" can be estimated from the throughput of nerve fibers, and each nerve fiber can carry no more than 10^3 pulses per second, whereas the conductors of modern computers can carry about 10^9 pulses per second. Hence the computer surpasses the human brain in the speed of its computing elements by a factor of 10^9 : 10^3 = 10^6. Nevertheless, owing to its parallel principle of operation, the brain works faster and can solve immeasurably more difficult tasks than the most powerful modern program-controlled computers. The reason, according to neurophysiology, is that the human brain contains about 10^15 computing elements (their role is played by synapses, the interfaces between the ends of nerve fibers), and all of them operate simultaneously, whereas in serial computers only a small number of elements operate in parallel at any given moment.

Publication: Early Detection of Potentially Non-synchronized CDC Paths Using Structural Analysis Technique (EWDTS, 2009)
Zaychenko, S.; Melnik, D.; Lukashenko, O.

The number of independent clock domains found on a typical modern device is continuously growing. According to the latest industry research, the average number of clock domains on a single device already exceeds 15-20 and keeps rising. CDC-related design flaws are also growing exponentially, and they are especially dangerous as the root cause of intermittent chip failures, which can be found only in silicon. Static CDC verification is considered one of the first de facto steps in today's SoC design methodology, since only static techniques can be applied as soon as the RTL starts taking shape. This paper discusses early detection of potentially missing synchronizers on clock domain crossing paths using structural static analysis. Sections of logic elements driven by clocks coming from different sources are called clock domains, and signals that pass between asynchronous clock domains are called clock domain crossing (CDC) signals. The DATA_A signal is treated as asynchronous in the receiving clock domain, because no constant phase or timing relationship exists between CLK_A and CLK_B. CDC bugs are intermittent by nature: a test suite may pass on a chip in the morning, yet the same tests may fail on the same chip in the afternoon. Consider the simplest example: a flip-flop located anywhere in the chip whose data signal comes from domain #A but whose clock signal comes from domain #B. Whenever its setup or hold condition is violated, the flip-flop may settle to one or to zero, and the outcome cannot be predicted.

Publication: Structural Analysis Technique and Bad Synchronization Styles (ХНУРЭ, 2009)
Melnik, D.; Lukashenko, O.

This paper discusses early detection of potentially missing synchronizers on clock domain crossing paths using structural static analysis.
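
As an illustration of the structural static analysis idea described in the two abstracts above, the following Python sketch flags flip-flops that capture data from a foreign clock domain without a recognizable synchronizer; the netlist representation, field names, and the crude two-flop heuristic are assumptions made for this example, not the authors' implementation:

# Sketch of structural CDC detection over a netlist reduced to flip-flops
# with known clock domains. A crossing is reported when a flop's data source
# lies in another domain and the flop does not look like the first stage of
# a two-flop synchronizer (i.e. it does not directly feed a same-domain flop).
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlipFlop:
    name: str
    clock_domain: str                  # e.g. "CLK_A", "CLK_B"
    data_source: Optional[str] = None  # name of the driving flip-flop, if any

def find_unsynchronized_crossings(flops: Dict[str, FlipFlop]) -> List[str]:
    violations = []
    for ff in flops.values():
        src = flops.get(ff.data_source) if ff.data_source else None
        if src is None or src.clock_domain == ff.clock_domain:
            continue  # no clock domain crossing at this flop
        # Crossing found: accept it only if this flop directly feeds another
        # flop clocked by the same receiving domain (two-flop heuristic).
        has_second_stage = any(
            other.data_source == ff.name and other.clock_domain == ff.clock_domain
            for other in flops.values()
        )
        if not has_second_stage:
            violations.append(ff.name)
    return violations

# DATA_A is launched from CLK_A and captured directly in CLK_B without a synchronizer.
netlist = {
    "launch_a": FlipFlop("launch_a", "CLK_A"),
    "capture_b": FlipFlop("capture_b", "CLK_B", data_source="launch_a"),
}
print(find_unsynchronized_crossings(netlist))  # ['capture_b']

Because the check is purely structural, it can run as soon as the RTL starts taking shape, which is exactly the point the abstracts make about static CDC verification.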

Publication: System Level Methodology for Functional Verification of SoC (EWDTW, 2006)
Adamov, A.; Zaychenko, S.; Myroshnychenko, Y.; Lukashenko, O.

Building a verification environment and the associated tests is a highly time-consuming process. Most project reports indicate that between 40% and 70% of the entire effort of a project is spent on verification, with 70% being much closer to the normal level for successful projects. This high level of effort indicates that the potential gains from successful re-use are significant. Most projects do not start with a complete set of hardware designs available for functional verification: usually a design comes together as smaller blocks, the blocks are integrated into larger blocks, and these may eventually be integrated into a system. That is the reason for performing functional verification at the system level. The paper describes a system-level modeling environment for functional verification of System-on-a-Chip models. System-level design allows design teams to rapidly create large system-on-a-chip designs (SoCs) by integrating premade blocks that do not require any design work or verification. One of the hottest topics in embedded system design today is Electronic System Level (ESL) design. Although the idea of describing a system at an abstract level has been around for a decade, only now are the various parts of the design flow becoming available to make it practical. ESL describes a system-on-chip (SoC) design in a way that is abstract enough and fast enough to explore the design space and to provide virtual prototypes for hardware and software implementation. It is becoming a fundamental part of the design flow because it can now be used throughout the iterative design process rather than only in the early system architecting phase. ESL provides tools and methodologies that let designers describe and analyze chips at a high level of abstraction, easing the pain of designing electronic systems that would otherwise be too costly, complex, or time-consuming to create. The adoption of ESL can be seen in the same light as the transition to register transfer level (RTL) methodologies 10-15 years ago, when complexity and time-to-market pressures obliged the industry to step up to another design level. As designs grow larger and incorporate more IP blocks, engineers will re-use more IP, and ESL methodologies that enable platform-based design will be increasingly necessary to create and test a complete system. For the most complex SoCs, IP reuse can only help up to a point: for a 40-million-gate SoC, filling even 75% of the device with existing IP leaves 10 million gates of original content to design. ESL methodologies that allow rapid creation of new blocks are therefore likely to be leveraged by designers to develop and verify this original content quickly while meeting time-to-market requirements. Among the 24% of respondents who have implemented some form of ESL design methodology, an overwhelming 87% believe ESL provides an acceptable or greater return on investment.

Publication: Verification Challenges of Clock Domain Crossings (EWDTS, 2008)
Zaychenko, S.; Melnik, D.; Lukashenko, O.

This paper discusses typical verification problems that occur within the SoC design cycle when multiple clock domains are involved. Critical cases leading to unpredictable SoC behavior during data transfer across clock domains are identified and described, and a principle for metastability modeling is suggested. Only the most elementary logic circuits use a single clock; today's systems-on-chip (SoCs) have dozens of asynchronous clocks. There are many software tools to assist in creating multimillion-gate ASIC/FPGA circuits, but the designer still has to know reliable design techniques to reduce the risk of CDC-related design re-spins. Moreover, most of the relevant literature does not cover CDC-related issues or approaches to preventing these costly silicon bugs.
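
The abstract above only states that a principle for metastability modeling is suggested; as a rough illustration of what such a model can look like, here is a minimal Python sketch built on an assumed random-resolution scheme (the window width, function names, and two-flop synchronizer wrapper are illustrative, not the authors' method):

# Illustrative metastability model for a clock domain crossing.
# A flop whose setup/hold window is violated outputs 'X' (metastable) for one
# cycle; one CLK_B period later the node has settled, but to an unpredictable
# level, which reproduces the intermittent behavior described in the abstracts.
import random

SETUP_HOLD_WINDOW = 0.1  # assumed width, as a fraction of the CLK_B period

def sample(data, change_time):
    """Sample asynchronous data at a CLK_B edge at time 0.0; change_time is
    the last DATA_A transition relative to that edge."""
    if abs(change_time) < SETUP_HOLD_WINDOW:
        return 'X'  # metastable: neither a clean 0 nor a clean 1
    return data

def resolve(value):
    """A cycle later the metastable node has settled to a random but valid level."""
    return random.randint(0, 1) if value == 'X' else value

def two_flop_synchronizer(data, change_time):
    """Classic 2-FF synchronizer: stage 2 re-samples stage 1 a full cycle later,
    so downstream logic never observes the metastable 'X' itself (only the
    cycle in which the new value arrives remains uncertain)."""
    stage1 = sample(data, change_time)
    return resolve(stage1)

random.seed(1)
print([sample(1, 0.02) for _ in range(5)])                 # all 'X': unusable downstream
print([two_flop_synchronizer(1, 0.02) for _ in range(5)])  # clean 0/1 values each cycle

Run repeatedly, the unsynchronized sample feeds 'X' into downstream logic, while the synchronized version always delivers a settled value; this is the intuition behind the two-flop synchronizers that the structural checks above look for.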