Department of Design Automation of Computing Equipment (АПОТ)
Now showing 1 - 20 of 537
Publication: A Diagnostic Model for Detecting Functional Violation in HDL-Code of System-on-Chip (EWDTS, 2011). Umerah, N. C.; Hahanov, V. I.
The problem of synthesizing or analyzing the components of any system can be formulated as the interaction of its model with input patterns and reactions in a space. This amounts to determining the symmetric difference of three components: the model, the test patterns, and the reaction of the model when a test pattern is applied to it. The distance or relationship between two or more objects in a space can be determined using the well-known Cartesian or polar coordinate systems. For Boolean variables, the Hamming distance has been used to determine how close or far apart two binary vectors of any length are. The Hamming distance yields only a cardinality, a single number; the beta metric proposed in this paper gives a broader view of how two or more binary vectors of any length relate to each other in a cyberspace, and the Hamming distance is a particular case of the beta metric. In this paper we define cyberspace as a set of interacting information processes and phenomena that conform to a predefined metric, using computer systems and networks as a vehicle. The paper is organized as follows: Section 2 discusses the beta metric used to define the relationship between objects in a space; Section 3 analyzes the interaction graph of the components of technical diagnostics; and Section 4 presents a model for searching for functional violations in HDL code.

Publication: A Security Model of Individual Cyberspace (EWDTS, 2011). Adamov, A.; Hahanov, V.
Previous studies in the field of cyberspace security were mostly based on analyzing the state of a computer network and identifying vulnerabilities in it [1], or on using multi-perspective parameters as security criteria to assess and predict the security state of a network system [2].
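The beta metric in the diagnostic-model abstract above is not defined in this listing, but its stated special case, the Hamming distance between binary vectors, can be sketched directly (a minimal illustration only, not the paper's beta metric):

```python
def hamming(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length binary vectors differ."""
    if len(a) != len(b):
        raise ValueError("vectors must have equal length")
    return sum(x != y for x, y in zip(a, b))

print(hamming("101100", "100101"))  # differs in two positions -> 2
```

The beta metric is said to refine this by describing how the vectors relate, rather than collapsing the comparison to a single count.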
In later studies, this approach was recognized as untenable because it ignores the behaviour of the user when a system anomaly occurs. According to [3], cyberspace is defined as "a massive socio technical system of systems, with a significant component being the humans involved". Thus, the authors associate cyber attacks with social, political, economic and cultural phenomena. Today an individual's virtual space has expanded through the widespread adoption of social networking and Internet services that allow data to be processed and stored in the cloud. Users are thus becoming less tied to their personal digital device, which is used only to access online services, obtain the necessary data and perform operations. This approach allows us to abstract away from the hardware used to access the Internet and to use any mobile hardware and software platform for a wide range of tasks in the "cloud" [4]. Examples of such services are cloud office suites (Google Documents, Microsoft Office Live), sharing of files and images, map services, translators, calendars and, finally, social networks, where each member can store personal information and gain access to the multimedia content of other users. All of this is evidence of humanity's transition to cloud technology everywhere. The protection of cloud services is a hot topic today because these technologies are widely used by organizations to create business service infrastructures. Accordingly, it is necessary to guarantee the security of corporate data in the cloud, which is an elusive task. To address it, a company may sign a Service Level Agreement (SLA) with a service provider, in which all security issues are determined at different levels of representation [5]. For instance, Intel has developed a suite of solutions for secure access and data storage in the cloud; Intel's technologies are supported by the leading antivirus companies Symantec and McAfee [6].
Taking into consideration the existing technologies in this area, a new model of ICS protection is suggested, which implies the creation of a secure environment for data storage and processing with the help of cloud computing technology.

Publication: A WSN Approach to Unmanned Aerial Surveillance of Traffic Anomalies: Some Challenges and Potential Solutions (EWDTS, 2012). David Afolabi; Ka Lok Man; Hai-Ning Liang; Eng Gee Lim; Zhun Shen; Chi-Un Lei; Tomas Krilavičius; Yue Yang; Lixin Cheng; Hahanov, V. I.; Yemelyanov, Igor
Stationary CCTV cameras are often used to help monitor car movements and detect anomalies, e.g. accidents, cars exceeding the speed limit, or driving under the influence of alcohol. The height of the cameras can limit their effectiveness and the types of image processing algorithms that can be used. With advances in inexpensive aerial flying objects and wireless devices, these two technologies can be coupled to support enhanced surveillance. The flying objects can carry multiple cameras and be sent well above the ground to capture and feed video/image information back to a ground station. In addition, because of the height the objects can achieve, they can capture videos and images that lend themselves more suitably to a variety of video and image processing algorithms, assisting analysts in detecting anomalies. In this paper, we examine some main challenges of using flying objects for surveillance purposes and propose some potential solutions to these challenges. By doing so, we attempt to provide the basis for a framework for building a viable system for improved surveillance based on low-cost equipment. With the cost of cars decreasing, more and more people are opting to use cars as their main means of transportation. In cities with large populations, the exponential rise in the number of cars on the streets can lead to many issues (e.g., accidents and congestion).
Governments are spending large amounts of resources to improve the means of monitoring car movement and, in the process, to enable enforcement officers to detect existing anomalies and prevent potential ones. One widespread technology used to monitor the flow of cars is CCTV. These cameras can be seen on top of street light posts, traffic lights and/or specialized street structures. Although useful, such structures are limited in height, and this limitation can severely constrain the kinds of images and videos that can be captured. In turn, the type of images and videos determines to a large extent how well they support computer vision and image analysis algorithms. We believe that unmanned aerial vehicles (UAVs) equipped with video cameras and wireless devices, used in conjunction with normal CCTV, can support enhanced monitoring of car movements. Unmanned flying objects have become inexpensive, and so have video cameras and wireless devices. In this paper, we explore some challenges of using these technologies for automatic monitoring of car flows and suggest some potential solutions for researchers to consider.

Publication: Algebra-Logical Repair Method for FPGA Logic Blocks (EWDTS, 2009). Hahanov, V.; Galagan, S.; Olchovoy, V.; Priymak, A.
At present there are many scientific publications covering SoC/SiP testing, diagnosis and repair problems [1-16, 19-20]. The testing and repair problem for digital system logic components has a special place, because repairing faulty logic blocks is a technologically complicated problem. Existing solutions proposed in published works can be divided into the following groups: 1. Duplication of logic elements or chip regions to provide a doubled hardware realization of the functionality; when a faulty element is detected, a multiplexer switches operation to the fault-free component [4].
The FPGA models proposed by Xilinx can be applied to the repair of Altera FPGA components. During repair, the main unit of measure is a row or column. 2. Application of genetic algorithms for diagnosis and repair based on off-line FPGA reconfiguration without external control devices [5]. The fault diagnosis reliability is 99%, and the repair time is 36 ms instead of the 660 s required for standard configuration of a project. 3. Time-critical FPGA repair by replacing local CLBs with redundant spares is proposed in [6, 7]. In critically important applications, the acceptable integration level for CLB replacement is about 1000 logic blocks. Repair technologies for digital system logic implemented on an FPGA are based on the existence or introduction of LUT redundancy after the place-and-route procedure. Physical faults, which appear during fabrication or operation, manifest themselves as logic or timing failures and result in malfunction of a digital system. Faults are tied not only to gates or LUT components but also to a specific location on the chip. The idea of digital system repair comes down to removing a faulty element by repeating place-and-route after diagnosis. Two repair technologies are possible: 1) Blocking a defective area by developing control scripts for the lengthy place-and-route procedure. This is not always acceptable for real-time digital systems, but the approach can remove defective areas of any multiplicity; blocking the defective areas by repeated place-and-route repairs the digital system. 2) For real-time digital systems, re-running place-and-route can have disastrous effects. A technological approach is needed that repairs the digital system's functionality within the milliseconds required to reprogram the FPGA with a new bitstream that removes defective areas from the chip's functionality.
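The spare-tile repair idea described above (covering faulty CLBs with redundant tiles) can be sketched as a toy assignment problem. The grid coordinates, greedy nearest-spare strategy and function names below are illustrative assumptions, not the paper's algebra-logical method:

```python
# Illustrative sketch (NOT the algebra-logical method): greedily assign each
# faulty CLB to the nearest still-unused spare tile on the chip grid.
def assign_spares(faulty, spares):
    """faulty, spares: lists of (row, col) grid coordinates; returns a repair plan."""
    free = list(spares)
    plan = {}
    for f in faulty:
        if not free:
            raise RuntimeError("not enough spare tiles to cover all faults")
        # Manhattan distance as a stand-in for relocation cost
        best = min(free, key=lambda s: abs(s[0] - f[0]) + abs(s[1] - f[1]))
        free.remove(best)
        plan[f] = best
    return plan

print(assign_spares([(0, 0), (2, 3)], [(0, 1), (3, 3), (5, 5)]))
```

The paper itself formulates the task as an exact coverage problem rather than a greedy heuristic; the sketch only illustrates the fault-to-spare mapping that a precomputed bitstream would encode.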
The approach is based on the preliminary generation of all possible bitstreams for blocking future defective areas by logically relocating them to the redundant, non-functional chip area. The larger the spare area, the smaller the number of bitstreams that must be generated a priori. For multiple faults not covered by a spare area, it is necessary to segment the digital project by decomposing it into disjoint parts, each with its own place-and-route map. In this case a digital system that has n spare segments can be repaired for n distributed faults; the total chip area consists of (n+m) equal parts. The research objective is to develop a repair method for FPGA logic blocks based on using the redundant chip area. Problems: 1) Development of an algebra-logical repair method for logic blocks of an FPGA-based digital system. 2) Development of a method of logic block matrix traversal to cover faulty FPGA components with spare tiles. 3) Analysis of practical results and future research.

Publication: Algebra-Logical Repair Method for FPGA Logic Blocks (ХНУРЭ, 2009). Hahanov, V. I.; Gharibi, W.; Guz, O. A.; Litvinova, E. I.
An algebra-logical repair method for FPGA functional logic blocks, based on solving the coverage problem, is proposed. It is focused on implementation in Infrastructure IP for system-on-chip and system-in-package. The method is designed to ensure the operability of FPGA blocks and of the digital system as a whole. It yields an exact and optimal solution with the minimum number of spares needed to repair FPGA logic components with multiple faults.

Publication: Analysis of Production Rules in Expert Systems of Diagnosis (Ukrainian State Academy of Railway Transport, 2013). Krivoulya, G. F.; Shkil, A. S.; Kucherenko, D. E.
This paper examines the problem of quality testing of production rules, which are the basis for judging the technical state of a computer system. The object of diagnosis is software.
Its quality is assessed on the basis of an expert appraisal of chosen attributes (diagnostic features) using the rules and procedures of fuzzy logic. Formal procedures are developed to check the production rules for correctness by analyzing the cubic form of their presentation on the basis of the proposed alphabet and procedures.

Publication: Analysis of the state diagram correctness of automatic logic control systems on FPGA (2019). Shkil, O. S.; Rakhlis, D. Y.; Kulak, E. M.; Filippenko, I. V.; Miroshnyk, M. M.; Hoha, M. V.
The work is dedicated to verification of automatic logic control systems by analyzing the correctness of the state diagrams of control finite state machines represented as code in a hardware description language. As the method of state diagram analysis, it is proposed to use the concept of orthogonality as a system of incompatible events. The correctness analysis is carried out by analyzing the results of behavioral modeling and logic synthesis using CAD tools.

Publication: Analyzing the ways of matching dynamic features of video stream to information and communication networks (Kharkiv National University of Radio Electronics, 2016). Barannik, D. V.; Boreiko, O.; Suprun, O.

Publication: Applied aspects of information technology (2020). Shevchenko, O. Y.; Beskorovainyi, V.; Petryshyn, L. B.
The effectiveness of man-made objects used in various spheres of human activity is largely determined by the decisions taken in the course of their design [1-3]. The design process involves the iterative solution of a set of structural, topological, parametric and process optimization problems under incomplete information, against a variety of functional and cost indicators (performance criteria). Only in the simplest cases can the decision maker choose the best solution directly from the set of effective ones [6-10].
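The orthogonality concept in the state-diagram entry above, a system of incompatible events, amounts to requiring that the guard conditions on transitions leaving a state never hold simultaneously. A minimal sketch by exhaustive enumeration (the encoding and names are illustrative assumptions, not the paper's procedure):

```python
from itertools import product, combinations

# Illustrative sketch: check that the guard conditions on transitions leaving
# an FSM state are pairwise orthogonal (never true at the same time).
def orthogonal(guards, n_inputs):
    """guards: predicates over a tuple of n_inputs booleans."""
    for g1, g2 in combinations(guards, 2):
        for bits in product([False, True], repeat=n_inputs):
            if g1(bits) and g2(bits):
                return False  # two transitions would fire at once
    return True

# Transitions leaving one state, with inputs (a, b):
guards = [lambda x: x[0] and x[1],      # a & b
          lambda x: x[0] and not x[1],  # a & ~b
          lambda x: not x[0]]           # ~a
print(orthogonal(guards, 2))  # True: at most one guard holds per input vector
```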
Because of the combinatorial nature of most synthesis tasks, the number of alternative solutions increases dramatically with the dimension of the design problem. The vast majority of options are ineffective (dominated): each of them can be improved on the set of feasible solutions in all respects simultaneously. This gives rise to the problem of forming the subset of efficient (unimprovable, Pareto-optimal) design decisions that constitutes the set of compromises, or of selecting such a subset from the constructed set of feasible variants [11-12]. In addition, for many contemporary design objects the generated or selected subset of effective options can be quite large and unsuitable for final expert evaluation and selection. This leads to the need to reduce the set of effective options based on a programmed preference among the quality indicators.

Publication: ARDUINO: a Universal Platform for Creating a Smart Home (ХНУРЕ, 2020). Шостак, М. В.
Smart homes allow you to forget about many technical aspects of everyday life. Ready-made solutions are on the market, but such systems are not always suitable for the tasks one would like to realize. A more flexible alternative is to build a smart home with your own hands on Arduino: this system allows any creative idea to be turned into an automated process.

Publication: Assertion Based Method of Functional Defects for Diagnosing and Testing Multimedia Devices (EWDTS, 2012). Hahanov, V.; Mostova, K.; Paschenko, O.
An essential increase of consumer requirements for complex electronic devices leads to substantial growth in the complexity of HW and SW components, services and system interfaces. This tendency increases the importance of providing high quality for HW, SW and networking components and services. The well-known rule of ten for hardware components states that the cost of fault detection increases tenfold at each subsequent design or manufacturing stage.
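The rule of ten just stated can be made concrete with a toy calculation; the stage names and unit base cost below are illustrative assumptions:

```python
# Toy illustration of the "rule of ten": the cost of detecting a fault grows
# tenfold at each successive stage it escapes to.
stages = ["design", "chip manufacturing", "board assembly", "system", "field"]
base_cost = 1.0  # cost units if the fault is caught at design time
costs = {stage: base_cost * 10 ** i for i, stage in enumerate(stages)}
print(costs)  # design: 1.0 ... field: 10000.0
```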
The same rule applies effectively to software design stages. One of the main goals coming to the foreground for industry is to decrease the cost of exploitation by creating standardized maintenance infrastructures that provide service, testing, disposal and elimination of functional defects. Nowadays the fast-growing complexity of hardware is transforming this rule into a rule of twenty, which makes it even more important to detect faults at early design stages rather than at chip/PCB manufacturing or system assembly stages [1]. The goal of this work is to develop a method that increases product quality by means of a sufficient HW/SW test and diagnosis approach, decreasing fault detection and defect localization time in order to improve system performance, using multimedia devices as an example.

Publication: Assertions based verification for SystemC (EWDTW, 2005). Forczek, M.; Zaychenko, S.
Assertion Based Verification (ABV) has gained worldwide acceptance as a verification methodology for electronic system designs. A number of papers [1-3] explain this methodology in depth. The original concept of an assertion comes from software development, where it (in particular the assert() macro defined in the C language [4]) has proved to be a very powerful tool for automatic bug and regression detection [5]. Assertions for hardware designs employ Linear Time Logic (LTL) to define expected and/or forbidden behavior. The foundation of ABV is Hardware Verification Languages (HVLs). HVLs combine the semantics of LTL with constructs for building reusable verification IP units. Verification IP units need to be bound to a design for effective use; thus HVLs provide constructs to specify connections with models in Hardware Description Languages (HDLs). Most ABV implementations are part of HDL-based integrated design environments (IDEs).
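As a reminder of the software ancestry of assertions noted above, here is a minimal analogue of the C assert() idea, shown in Python rather than C or an HVL; hardware assertion languages add LTL temporal semantics on top of this fail-fast invariant check:

```python
# A software-style assertion documents an invariant and fails fast when the
# invariant is violated, which is the seed idea behind hardware assertions.
def fifo_pop(queue):
    assert len(queue) > 0, "protocol violation: pop from empty FIFO"
    return queue.pop(0)

q = [1, 2]
print(fifo_pop(q))  # 1
```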
The SystemC open initiative [6] provides an alternative to HDLs: it extends C++ [7], the industry-strength notation for complex systems, with RTL and system-level hardware concepts in the form of a C++ template library. In its original approach, SystemC models are processed by a standard C++ toolset and executed as standalone applications. SystemC became a very popular environment for modeling at the system level of abstraction. HDL-based IDEs offer co-simulation capabilities with a SystemC engine, but it remains an external unit to the HDL simulator. Applying ABV to SystemC designs is a natural step in the integration of HDL and SystemC environments. Since an HDL design can be co-simulated with a SystemC model, there is an easy way to associate a verification unit with a SystemC one: the SystemC unit is connected to an HDL wrapper unit that provides an entry point for binding the verification unit. This method does not require any additional tools, assuming an HDL simulator is available.

Publication: Assertions-based mechanism for the functional verification of the digital designs (EWDTW, 2005). Hahanov, V. I.; Yegorov, O.; Zaychenko, S.; Parfentiy, A.; Kaminska, M.; Kiyaschenko, A. V.
According to [1], the verification cost of digital devices designed on the basis of ASIC, IP-core and SoC technologies takes up to 70% of the overall design cost. Similarly, up to 80% of the project source code implements a testbench. Reducing these two parameters minimizes time-to-market, which is one of the main problems for the world-leading companies in Electronic Design Automation (EDA). The goal of verification is to eliminate all design errors as early as possible so as to meet the requirements of the specification. An error that passes through subsequent design stages (from a block to a chip, and later to a system) each time increases the cost of its elimination.
Validation, a higher-level verification model, confirms the correctness of the project with respect to the implementation of the major specified functionality. The goal of this paper is to noticeably decrease verification time by extending the design with software-based redundancy, the assertions mechanism [2-5], which allows simple analysis of the major specified constraints during device simulation and diagnosis of errors when they are detected. To achieve the declared goal it is necessary to solve the following problems: 1. To formalize the assertions-based product verification process model. 2. To develop the software components for synthesis and analysis of assertions for the functionality, the blocks and the entire system. 3. To obtain experimental confirmation of the benefits of using assertions to reduce time-to-market, in other words, to noticeably reduce verification and overall design time.

Publication: Big Data: Problems, Analysis Methods, Algorithms (Kharkiv National University of Radio Electronics, 2017). Магеррамов, З. Т.; Абдулаев, В. Г.; Магеррамова, А. З.

Publication: Brain-Like Computer Structures (ХНУРЭ, 2009). Hahanov, V.; Chumachenko, S. V.; Umerah, N. C.; Yves, T.
A high-speed multiprocessor architecture for brain-like analysis of information, represented in analytic, graph and table forms of associative relations, to search, recognize and make decisions in n-dimensional discrete vector space is offered. Vector-logical process models of actual applications are described, in which the quality of a solution is estimated by the proposed integral non-arithmetical metric of the interaction between binary vectors.

Publication: Brainlike computing (EWDTW, 2005). Shabanov-Kushnarenko, Yu.; Klimushev, V.; Lukashenko, O.; Nabatova, S.; Obrizan, V.; Protsay, N.
This paper offers a mathematical foundation for a brainlike computer.
A new approach to building artificial intelligence is discussed: human intelligence is considered as a material embodiment of the mechanism of logic. The hardware efficiency of a logic net implementation is also shown. The urgency of the research is determined by the need to design a parallel computer for a significant performance increase in comparison with software implementations on von Neumann architectures. The goal of the research is the design of a parallel computer operating on the principles of the human brain and built on a modern element base. To reach the goal it is necessary to solve the following tasks: 1) designing a new approach to artificial intelligence in which human intelligence is considered as a material implementation of the mechanism of logic; 2) algebraization of logic; 3) formalization of the logic net model; 4) developing logic synthesis procedures for the logic net; 5) designing the logic net design flow; 6) analysis of hardware implementation efficiency. Rapidly progressing computerization and informatization demand a constant increase in computer performance, which is becoming more and more difficult to achieve. The reserves for increasing the speed of a computer's elements are getting exhausted. Another way is to scale up the number of simultaneously operating elements in the processor. Thanks to microminiaturization, falling prices of electronic elements and advances in the automation of computer design and manufacturing, it is now practical to build computers with up to 10^8 elements. However, with present serial computers operating on J. von Neumann's principle of program control, it is senseless to do this, as only a small number of their elements operate simultaneously at any given time. Attempts at conversion to parallel machines do not provide the expected growth in performance.
For example, the performance of multiprocessor computers does not grow proportionally to the number of processors available in them, as it seemingly should, but much more slowly. There are essential difficulties in creating high-efficiency neurocomputers constructed as formal neuron networks. Meanwhile, there is a "computer" created by nature, the human brain, for which the problem of fully parallel information processing is completely solved. The human brain is slow in comparison with the modern computer. Its "clock frequency" can be estimated from the throughput of nerve fibers: each nerve fiber can pass no more than 10^3 pulses per second, while the conductors of modern computers can transfer about 10^9 pulses per second. Hence the computer surpasses the human brain in the speed of its computing elements by 10^9 : 10^3 = 10^6 times. Nevertheless, the brain, due to its parallel principle of action, works faster and is capable of solving immeasurably more difficult tasks than the most powerful modern computers with program control. This is because the human brain incorporates about 10^15 computing elements (the synapses, interfaces between the ends of nerve fibers), and according to neurophysiology all of them operate simultaneously. In serial computers, only a small number of elements operate in parallel at any moment.

Publication: Cascade Structural Encoding of Binary Arrays (EWDTS, 2008). Barannik, Vlad.; Hahanova, A. V.
The shortcomings of existing approaches to the compression of binary data for use in digital diagnostics are revealed. It is substantiated that representing a binary array as an integral structure, a cascade structural number satisfying limits on the number of runs of ones and on the dynamic range of the code-numbers of a one-dimensional floating structural number (OFSN), provides an additional reduction of structural redundancy.
The basic stages of two-stage cascade structural encoding of binary arrays are expounded. It is proved that the number of digits required to represent a binary column treated as an element of a cascade structural number is smaller than the number required to represent the same column treated as a one-dimensional floating structural number. The features of processing diagnostic information are as follows: the test sets to be processed have arbitrary structure and different statistical characteristics; test information is binary; and diagnosis of digital circuits is carried out on the basis of the stored test sets. Digital diagnostics is therefore associated with growing volumes of test information, which gives rise to the applied scientific task of reducing those volumes.

Publication: Cloud Infrastructure for Car Service (EWDTS, 2013). Litvinova, E. I.; Englesy, I. P.; Miz, V. A.; Shcherbin, D.
A set of innovative scientific and technological solutions is developed, including solutions to social, human, economic and environmental problems associated with the creation and use of a cloud for monitoring and management. All of these technologies and tools are integrated into an automaton model of real-time interaction between monitoring and management clouds, vehicles and road infrastructure. Each car has a virtual model in cyberspace, an individual cell in the cloud, which is invariant with respect to the drivers of the vehicle. Where is the real cyber world going? Corporate networks, personal computers and individual services (software) are moving to the "clouds" of cyberspace, which show an obvious tendency to partition the Internet into specialized services (Fig. 1). If today 4 billion users are connected to the Internet (1 zettabyte = 2^70 ≈ 10^21 bytes) by means of 50 billion gadgets, in five years each active user will have at least 10 devices for connecting to cyberspace.
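The cascade structural encoding itself is only summarized above. As a generic illustration of why binary test data compresses well, here is a simple run-length sketch, explicitly not the paper's method:

```python
# Generic illustration (NOT cascade structural encoding): binary test vectors
# often contain long runs, so even simple run-length encoding reduces volume.
def rle(bits: str):
    """Encode a binary string as (bit, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

print(rle("0000111100"))  # [('0', 4), ('1', 4), ('0', 2)]
```

The cascade scheme instead treats the array as an integral structural number with limits on runs and code-number range, which the abstract argues beats such one-dimensional encodings.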
Using personal computers without replicating data to all devices becomes impossible. But even simple copying requires non-productive time for servicing systems and projects, which can reach 50% if several devices or servers with identical functions are in use. Unprofessional (bad) servicing of such equipment creates problems of reliable data retention as well as unauthorized access. There is also the problem of remote access to physical devices when users migrate in space, and obtaining the necessary services and information from gadgets left at home or in the office is difficult. The economics of effectively using purchased applications installed on gadgets and personal computers forces users to give up such purchases in favor of almost rent-free services in the clouds. All of the above is an important argument and undeniable evidence of the imminent transition of all mankind to a cyberspace of virtual networks and computers located in reliable service clouds. The advantages of the virtual world lie in the fact that the micro-cells and macro-networks in the clouds are invariant with respect to the numerous gadgets of each user or corporation. Cloud components solve almost all of the above problems of reliability, safety and service, and have practically no disadvantages. As corporations and users move to the clouds, protecting information and cyber components from unauthorized access, destructive penetration and viruses becomes a topical and market-appealing problem. It is necessary to create a reliable, testable, penetration-protected cyberspace infrastructure (virtual PCs and corporate networks) similar to the solutions currently available in the real cyber world. Thus, each service developed in the real world should be placed in the appropriate cloud cell that combines components similar in functionality and utility.
The above applies directly to the road service, which has a digital representation in cyberspace for subsequent modeling of all processes in the cloud, in order to offer every driver quality conditions of movement while saving time and money. The goal of the project is to improve the quality and safety of traffic by creating an intelligent road infrastructure, including clouds for traffic monitoring and quasi-optimal motion control in real time using RFID passports of vehicles, which minimizes the time and costs of traffic management and yields innovative scientific and technological solutions to social, humanitarian, economic and environmental problems of the world.

Publication: Cloud Traffic Control System (EWDTS, 2013). Ziarmand, A.; Hahanov, V. I.; Guz, O. A.; Ngene, C. U.; Arefjev, A.
A cloud service "Green Wave" (an intelligent road infrastructure) is proposed to monitor and control traffic in real time through the use of traffic controllers and RFID-equipped cars, in order to improve the quality and safety of vehicle movement and to minimize the time and costs of moving vehicles along specified routes. The evolution of the cyber world is divided into the following periods: 1) the 1980s, the formation of personal computers; 2) the 1990s, the introduction of Internet technologies into production processes and people's lives; 3) the 2000s, improving the quality of life through the introduction of mobile devices and cloud services; and 4) the 2010s, the creation of a digital infrastructure for monitoring and control of moving objects (air, sea and ground transportation, and robots). Therefore, a market-feasible problem at present is the system integration of a monitoring-control cloud service, transport RFID blocks and digital road infrastructure tools for optimal online vehicle and traffic control, in order to address social, human, economic and environmental problems. What is the basis of the world's cyberspace? The silicon chip and its analogs.
Modern microelectronics enables the creation of three-dimensional rather than planar transistor structures (3D FinFETs) in the 14 nm range, approaching atomic dimensions. This means the appearance in the near future of 3D systems-on-chip instead of flat structures or systems-in-package. The advantages of such chips significantly affect the characteristics of industrial products in terms of energy consumption, dimensions, performance, cost and quality, due to reducing not only the dimensions of the components but also the interconnections between them. At the same time, problems arise with heat removal from the internal area of a 3D chip, as well as with creating new technologies for designing, verifying, testing, diagnosing and repairing its components. Thus, the microworld of cyberspace is moving into three dimensions, though not easily. The macroworld remains flat when components, computers, networks and cloud services of cyberspace are combined into systems. What arguments can be made for transferring the macroworld into 3D space? They are the following: the compactness of information, the performance of searching in cyberspace, and its dimensionality. A triangular flat structure in which all nodes are adjacent has a major drawback in two dimensions: encoding its three nodes or edges requires three codes, which means that one code of the two-bit vector goes unused. Therefore, creating a primitive structure in which all nodes are adjacent and number four, making full use of the two-bit code space, means rediscovering an amazing 3D figure: the tetrahedron. It has six edges or distances, whose xor-sum is equal to zero. When describing the figure, two edges are redundant, which can be used to reduce the volume of information by up to 66% during data storage and transfer. Forming cyberspace from primitive tetrahedra allows optimizing (minimizing) the ratio of the structural complexity of the space to the average distance between two points.
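The tetrahedron property claimed above, six edges whose xor-sum is zero, can be checked directly by labeling the four mutually adjacent nodes with the four two-bit codes and taking each edge as the xor of its endpoint codes:

```python
from itertools import combinations

# Label the tetrahedron's four mutually adjacent nodes with the four 2-bit
# codes; each edge is the xor of its endpoint codes.
nodes = [0b00, 0b01, 0b10, 0b11]
edges = [a ^ b for a, b in combinations(nodes, 2)]
print(len(edges))  # 6 edges

xor_sum = 0
for e in edges:
    xor_sum ^= e
print(xor_sum)     # 0: the xor-sum of all six edges vanishes
```

Each edge value also occurs exactly twice among the six, which is the redundancy the text proposes to exploit when storing and transferring the structure.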
The object of research is technologies for monitoring and management of vehicles integrated with cloud services, based on the use of the existing road infrastructure, RFID, radar and radio navigation. The subject of research is the traffic and road infrastructure of Ukraine and its regions, as well as advanced software and hardware RFID systems for monitoring and road management, based on road controllers, global positioning and navigation systems (GPS, GPRS) and cloud services on the Internet. The essence of the research is the creation of an intelligent road infrastructure (IRI), the cloud service "Green Wave", for monitoring the infrastructure and managing roads in real time. It is based on creating a virtual road infrastructure (Fig. 2) integrated with road traffic controllers and vehicle RFID, in order to improve the quality and safety of vehicle movement and to minimize time and costs when realizing routes.

Publication: Cluster coding in system of multilevel selective data processing (Kharkiv National University of Radio Electronics, 2016). Barannik, Vlad.; Havrylov, D.; Himenko, V.; Stetsenko, O.