Browse by author "Obrizan, V."
Now showing 1 - 6 of 6
Publication: A Method of High-Level Synthesis and Verification with SystemC Language (KhNURE, 2010). Obrizan, V.

This paper presents a method for automatic synthesis of an RTL interface for a given C++ function, as well as for a given SystemC interface. This task is important in a high-level synthesis design flow, where design entry is usually done in an abstract language such as C++. Because a single high-level source description may target different SoC architectures and protocols, the corresponding pin-level interfaces and protocols must be generated automatically.

Publication: Brainlike Computing (EWDTW, 2005). Shabanov-Kushnarenko, Yu.; Klimushev, V.; Lukashenko, O.; Nabatova, S.; Obrizan, V.; Protsay, N.

This paper offers a mathematical foundation for a brainlike computer. A new approach to artificial intelligence is discussed: human intelligence is considered a material embodiment of the mechanism of logic. The hardware efficiency of a logic-net implementation is also shown. The urgency of the research stems from the need to design a parallel computer offering a significant performance increase over software implementations on von Neumann architectures. The goal of the research is the design of a parallel computer that operates on the principles of the human brain and is built on a modern element base. To reach this goal, the following tasks must be solved: 1) devise a new approach to artificial intelligence in which human intelligence is treated as a material implementation of the mechanism of logic; 2) algebraize logic; 3) formalize the logic-net model; 4) develop logic-synthesis procedures for logic nets; 5) design a logic-net design flow; 6) analyze the efficiency of the hardware implementation. Rapidly progressing computerization and informatization demand a constant increase in computer performance. However, this is becoming ever more difficult: the reserves for increasing the speed of individual computing elements are nearly exhausted.
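The interface-synthesis idea in the first abstract above can be sketched in miniature: a plain function (the high-level "design entry") is wrapped in a cycle-based pin-level handshake, much as an interface synthesizer would emit an RTL wrapper. This is an illustrative Python sketch, not the paper's method; the names `RtlWrapper`, `step`, and the valid-flag protocol are assumptions.

```python
# Illustrative sketch: wrapping a plain function in a pin-level
# valid-handshake interface, as an interface synthesizer might.
# All names here are hypothetical, not taken from the paper.

def add(a, b):
    """The high-level 'design entry' function."""
    return a + b

class RtlWrapper:
    """Cycle-based model of an auto-generated handshake interface."""
    def __init__(self, func):
        self.func = func
        self.out_valid = False
        self.result = None

    def step(self, in_valid, operands):
        # One clock cycle: accept inputs when in_valid is asserted,
        # present the result with out_valid raised for that cycle.
        if in_valid:
            self.result = self.func(*operands)
            self.out_valid = True
        else:
            self.out_valid = False
        return self.out_valid, self.result

w = RtlWrapper(add)
assert w.step(True, (2, 3)) == (True, 5)   # transfer accepted
assert w.step(False, ()) == (False, 5)     # idle cycle, result held
```

The same wrapper shape could be retargeted to different handshake protocols, which is the point the abstract makes about one source description serving multiple SoC interfaces.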
One remaining avenue is to increase the number of simultaneously operating elements in the processor. Thanks to advances in microminiaturization, falling element prices, and progress in the automation of computer design and manufacturing, it is now practical to build computers with up to 10^8 elements. With serial computers built on J. von Neumann's principle of program control, however, doing so is pointless, because only a small number of elements operate simultaneously at any given time. Attempts to convert to parallel machines do not deliver the expected growth in performance: for example, the throughput of multiprocessor computers grows much more slowly than the number of processors, rather than proportionally, as one might expect. Essential difficulties also arise in building high-performance neurocomputers constructed as formal neuron networks. Meanwhile, nature has already created a "computer", namely the human brain, for which the problem of fully parallel information processing is completely solved. The brain is slow compared with a modern computer: its "clock frequency" can be estimated from the throughput of nerve fibers, each of which can carry at most 10^3 pulses per second, whereas the conductors of modern computers carry about 10^9 pulses per second. Hence the computer surpasses the human brain in element speed by a factor of 10^9 / 10^3 = 10^6. Nevertheless, thanks to its parallel principle of operation, the brain works faster and can solve immeasurably harder problems than the most powerful program-controlled computers.
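The abstract's observation that multiprocessor throughput grows much more slowly than the processor count is exactly the behavior Amdahl's law predicts: the serial fraction of a workload bounds the achievable speedup. A minimal sketch (standard textbook formula, not from the paper):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: the serial fraction limits overall speedup.

    parallel_fraction: share of the work that can run in parallel (0..1).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, 100 processors
# yield only about a 17x speedup, not 100x.
s = amdahl_speedup(0.95, 100)
assert 16.8 < s < 16.9
```

This is why the abstract argues that adding processors to a von Neumann design is not enough, and that a fundamentally parallel organization is needed.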
This is because the human brain incorporates about 10^15 computing elements (the role is played by synapses, the interfaces between the ends of nerve fibers), and according to neurophysiology all of them operate simultaneously. In a serial computer, only a small number of elements operate in parallel at any moment.

Publication: Hierarchical Hybrid Approach to Complex Digital Systems Testing (EWDTW, 2005). Hahanova, I. V.; Obrizan, V.; Ghribi, W.; Yeliseev, V.; Ktiaman, H.; Guz, O. A.

This paper offers an approach to complex digital system testing based on scaling the design hierarchy during the diagnosis experiment. Several models of testing are proposed, and the main principles of organizing the testing system are given. This approach allows a significant reduction in overall system testing and verification time.

Publication: Logic and Fault Simulation Based on Shared-Memory Processors (EWDTW, 2006). Obrizan, V.; Shipunov, V.; Gavryushenko, A.; Kashpur, O.

Existing Electronic Design Automation software shows poor support for dual-core processors, and as a result processing resources are badly utilized. This work in progress explores existing approaches to parallel logic and fault simulation on dual-core workstations. The growing scale of modern digital systems-on-chip continuously increases the complexity of testing during design and manufacturing, making fault simulation and automatic test pattern generation ever more relevant. The performance of fault and fault-free simulation software, and the speed of workstations, grow noticeably more slowly than the structural and functional complexity of digital systems or the cost of verification. In the era of embedded systems, it is easy to create complex devices using a system-level approach, but hard to simulate, verify, and test them. Previously, engineers used high-performance workstations to reduce simulation run time.
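One classic way to accelerate the logic simulation discussed in the abstract above, independent of multi-threading, is pattern-parallel (bitwise) simulation: each bit position of a machine word holds one test pattern, so a single bitwise operation evaluates a gate across many patterns at once. A minimal sketch, assuming a hypothetical two-gate netlist (not a circuit from the paper):

```python
# Pattern-parallel (bitwise) logic simulation sketch: one integer
# operation evaluates a gate for WIDTH test patterns simultaneously.
# The netlist y = (a AND b) OR (NOT a) is purely illustrative.

WIDTH = 64
MASK = (1 << WIDTH) - 1  # keep NOT within WIDTH bits

def simulate(a, b):
    """Evaluate y = (a AND b) OR (NOT a) for WIDTH packed patterns."""
    and_ab = a & b
    not_a = ~a & MASK
    return (and_ab | not_a) & MASK

# Four patterns in the low bits: (a, b) = (0,0), (1,0), (0,1), (1,1)
a = 0b1010
b = 0b1100
y = simulate(a, b)
# Expected per pattern: 1, 0, 1, 1 (low bit first)
assert y & 0b1111 == 0b1101
```

On a shared-memory multiprocessor, independent pattern words (or independent faults, for fault simulation) can then be distributed across threads, which is the direction the abstract's research tasks point toward.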
Nowadays, however, microprocessor frequencies have stopped rising, and to solve performance problems computing has entered the multi-core era. Multiprocessors have arrived on home and office desktops, not only in supercomputer centers, so clock frequency alone no longer determines workstation performance. It is also well known that a single-threaded application or serial algorithm, even one highly optimized for serial processing, shows no meaningful acceleration on a multiprocessor system. Today, every application must be designed to extract maximum performance from multi-core architectures; this statement is the baseline of the proposed research. The goal of the research is to reduce simulation run time through efficient shared-memory processing. Research tasks: 1) analyze existing algorithms and software products with respect to serial and parallel data processing; 2) develop parallel algorithms for efficient shared-memory utilization; 3) develop a software implementation and conduct verification and testing.

Publication: Testing Challenges of SoC Hardware-Software Components (EWDTS, 2008). Hahanov, V. I.; Obrizan, V.; Miroshnichenko, S.; Gorobets, A.

Innovative testable-design technologies for hardware and software are considered, oriented toward building graph models of SoC components for effective test development and component verification. Adapting the testing and verification methods of digital systems to the testable design and diagnosis of software can bring large financial and time dividends. The following questions are of interest: 1. Classification of the key uses of SoC testable-design technologies in software testing and verification problems. 2. A universal model of a hardware or software component in the form of a directed register-transfer and control graph, on which the problems of testable design, test synthesis, and test analysis can be solved. 3.
Metrics for evaluating the testability (controllability and observability) of hardware and software on the register-transfer and control graph model. The silicon chip, as the basis of computer and communicator development, should be regarded as the initial kernel from which new testing and verification technologies emerge in software and computer engineering. The chip serves as a proving ground for creating and testing new facilities and methods of component routing, placement, synthesis, and analysis. Technological solutions proven over time in microelectronics are then captured and carried over into macroelectronics (computer systems and networks). Some artifacts relating to this continuity of technological innovation follow.

Publication: Transaction Level Model of Embedded Processor for Vector-Logical Analysis (EWDTS, 2012). Adamov, A.; Hahanova, I. V.; Obrizan, V.; Shcherbin, D.

A transaction-level model of an embedded processor for improving the performance of logical-relation analysis is proposed. It is based on a hardware implementation of vector operations, and examples are given of using the model for the semantic analysis of Russian adjectives. The embedded processor was designed as part of an SoC to be implemented on an FPGA. As the complexity and speed of modern digital devices increase, so do their energy consumption and cost. A trade-off in this situation is to divide tasks across multiple processor cores, creating parallel systems from a coherent set of specialized calculators. Such structures can improve performance on computational problems while reducing the power consumption and hardware cost of digital systems. Of special interest to the electronics market is the scientific and technical direction of formalizing human mental activity to create artificial-intelligence components.
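The controllability metrics mentioned in the testing abstract above are commonly computed, in textbook form, by the SCOAP method: CC1/CC0 give the relative cost of driving a line to 1 or 0, propagated gate by gate through the netlist. This sketch uses the standard SCOAP rules for AND and OR gates on a hypothetical two-gate circuit, not the paper's exact graph-based metric:

```python
# Minimal SCOAP-style combinational controllability sketch (textbook
# formulation, illustrative only). CC1/CC0 = cost of setting a line
# to 1/0; primary inputs have cost 1 for both values.

def and_gate(cc_inputs):
    # AND output is 1 only if ALL inputs are 1 (sum of CC1s),
    # and 0 if ANY single input is 0 (cheapest CC0).
    cc1 = sum(c1 for c1, c0 in cc_inputs) + 1
    cc0 = min(c0 for c1, c0 in cc_inputs) + 1
    return cc1, cc0

def or_gate(cc_inputs):
    # OR is the dual: any input at 1 sets the output, all at 0 clear it.
    cc1 = min(c1 for c1, c0 in cc_inputs) + 1
    cc0 = sum(c0 for c1, c0 in cc_inputs) + 1
    return cc1, cc0

pi = (1, 1)                   # primary input: CC1 = CC0 = 1
g1 = and_gate([pi, pi])       # AND of two primary inputs
g2 = or_gate([g1, pi])        # OR of the AND output with an input
assert g1 == (3, 2)
assert g2 == (2, 4)
```

Higher values flag lines that are hard to control, which is the kind of signal a testable-design flow uses to decide where to insert test points.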
Intelligent tools such as expert systems, image recognition, and decision-making require effective, high-speed engines (multiprocessor or specialized embedded processors). A typical example in this domain that requires a specialized processor is the analysis and synthesis of natural-language constructs, and a key point in designing such a word processor is the hardware implementation of the device that performs this analysis and synthesis. Purpose: development of a transaction-level model of a special embedded processor for the hardware realization of vector operations. Objectives: 1. Analysis of publications on specialized logic-processor design [1-4]. 2. Analysis of the syntactic and semantic models of word processing implemented for natural languages [5-6]. 3. Creation of the architecture of a specialized embedded processor that analyzes the logical net of language constructs [6]. 4. Hardware implementation of a transaction-level model of a device that performs grammatical analysis of Russian adjectives. The prototype design used a specialized device that performed grammatical analysis of adjectives and was implemented in an FPGA [6]. The proposed model is more flexible and can handle any logical net of syntactic and semantic relations. Using transaction-level models and design techniques made it possible to focus on the order of data processing and transmission while suppressing unimportant details.
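The vector-logical style of analysis described in the last abstract can be illustrated in miniature: grammatical features are encoded as bit vectors, so an agreement check between two word forms reduces to a single bitwise AND, the kind of vector operation the embedded processor implements in hardware. The feature encoding and word forms below are assumptions for illustration, not the paper's actual data:

```python
# Hedged sketch of vector-logical grammatical analysis: features as
# one-hot bits, agreement as bitwise AND. Encoding is illustrative.

# Feature bits: gender (masculine/feminine/neuter), number (sg/pl)
M, F, N, SG, PL = (1 << i for i in range(5))

# A form's vector sets every feature value it is compatible with.
noun_dom  = M | SG            # a masculine singular noun
adj_novyj = M | SG            # adjective form: masculine singular
adj_novye = M | F | N | PL    # adjective form: plural, any gender

def agrees(adj, noun):
    # Agreement holds iff the forms share a gender bit AND a number bit;
    # on the vector-logic processor this is two ANDs and zero tests.
    both = adj & noun
    return bool(both & (M | F | N)) and bool(both & (SG | PL))

assert agrees(adj_novyj, noun_dom)        # sg adjective fits sg noun
assert not agrees(adj_novye, noun_dom)    # pl adjective does not
```

Because every relation is just a bit vector, the same engine can evaluate any logical net of syntactic or semantic constraints, which is the flexibility claim the abstract makes for the transaction-level model.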