
Todor Ivanov

DOI: 10.58542/jbota.v61i01.112
2024
Loosened hip joint prosthesis-decision options
Between 2013 and 2015, 36 patients underwent revision surgery for complications that developed after primary hip joint replacement. Of these, 14 were male and 23 female, with an average age of 73 years.
 We examined four groups of patients, defined by the type of complication:
 - Aseptic loosening in 23 patients, appearing on average 10 years after primary hip prosthesis surgery.
 - Septic loosening in 2 patients, revealed between the second and tenth month postoperatively. In two other patients we observed an infected haematoma.
 - Hip joint prosthesis dislocation (luxation), caused by trauma or an extreme incorrect movement, in 5 patients. In one of them we found acetabular cup malposition.
 - Periprosthetic fractures with aseptic loosening in 3 patients.
 All 36 patients had a primary cemented hip joint prosthesis.
 In the first group, depending on which prosthesis components were loosened, we replaced both the acetabular cup and the femoral stem in 15 patients, only the acetabular cup in 4 patients, and only the femoral stem in 3 patients.
 In 2 patients from the second group we replaced the primary prosthesis with a total spacer for 8 months, followed by revision prosthesis surgery. In the 2 haematoma patients we performed debridement, lavage-drainage and long-term antibiotic treatment.
 In 2 patients of the third group we replaced the primary head with a longer one, and in 3 other patients we replaced the primary acetabular cup with a shoulder-collar cemented cup.
 In 2 patients of the fourth group we replaced the primary acetabular cup and femoral stem with a shoulder-collar cemented cup and a revision stem supplied by Implant and Zimmer (Revitan). In the third patient we implanted a revision femoral stem (Implant).
 Three years after revision hip joint prosthesis surgery we observe good prosthesis stability and good functional scores.
DOI: 10.48550/arxiv.1512.08417
2015
Cited 6 times
Evaluating Hive and Spark SQL with BigBench
The objective of this work was to use BigBench [1] as a Big Data benchmark to evaluate and compare two processing engines: MapReduce [2] and Spark [3]. MapReduce is the established engine for processing data on Hadoop. Spark is a popular alternative engine that promises faster processing times than MapReduce. BigBench was chosen for this comparison because it is the first end-to-end analytics Big Data benchmark and is currently under public review as TPCx-BB [4]. One of our goals was to evaluate the benchmark by performing various scalability tests and to validate that it is able to stress-test the processing engines. First, we analyzed the steps necessary to execute the available MapReduce implementation of BigBench [1] on Spark. Then, all 30 BigBench queries were executed on MapReduce/Hive with different scale factors in order to see how the performance changes as the data size grows. Next, the group of HiveQL queries was executed on Spark SQL and compared with their respective Hive runtimes. This report gives a detailed overview of how to set up an experimental Hadoop cluster and execute BigBench on both Hive and Spark SQL. It provides the absolute times for all experiments performed at different scale factors, as well as query results which can be used to validate correct benchmark execution. Additionally, multiple issues were encountered, and workarounds were found, during our work. An evaluation of the resource utilization (CPU, memory, disk and network usage) of a subset of representative BigBench queries is presented to illustrate the behavior of the different query groups on both processing engines. Last but not least, it is important to mention that large parts of this report are taken from the master's thesis of Max-Georg Beer, entitled "Evaluation of BigBench on Apache Spark Compared to MapReduce" [5].
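The Hive-versus-Spark-SQL comparison described above comes down to running the same HiveQL file on both engines and recording wall-clock times. Below is a minimal sketch of such a timing harness, assuming a local query file and the standard `hive -f` and `spark-sql -f` command-line invocations; it is not part of the BigBench kit itself.

```python
import subprocess
import time

# Hypothetical query file; BigBench ships its own driver, this is only a
# simplified harness for timing a single HiveQL query on both engines.
QUERY_FILE = "q05.sql"  # assumed local HiveQL file
ENGINES = {
    "hive":      ["hive", "-f", QUERY_FILE],       # MapReduce/Hive execution
    "spark-sql": ["spark-sql", "-f", QUERY_FILE],  # Spark SQL execution
}

def run_and_time(cmd):
    """Run one engine's CLI and return the elapsed wall-clock time in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    times = {name: run_and_time(cmd) for name, cmd in ENGINES.items()}
    for name, elapsed in times.items():
        print(f"{name:10s} {elapsed:8.1f} s")
    print(f"speed-up (Hive / Spark SQL): {times['hive'] / times['spark-sql']:.2f}x")
```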
DOI: 10.4028/www.scientific.net/msf.618-619.345
2009
Cited 7 times
Decision and Design Methodologies for the Lay-Out of Modular Dies for High-Pressure-Die-Cast-Processes
Within the project “Decision and design methodology for the lay-out of modular dies”, which is part of the Cluster of Excellence “Integrative Production Technology for High-Wage Countries” established and financed by the German Research Foundation (DFG), the main objective is to set guidelines for cost-effective, high-quality high pressure die casting (HPDC) moulds. The strong increase in product variants and the growing demand for individualised products result in a growing complexity of all related products. The main objective of this project is to bridge the existing gap between individual manufacturing and mass production. A new perspective on the value creation chain of HPDC dies has to be established. First of all, the methodology for the lay-out of modular dies consists of an analysis of the die cast moulds already produced. For the development of modules, standard parts and different die types, a wide range of HPDC dies will be compared with each other and subsequently clustered along specific criteria such as size or number of core sliders. Another step consists in optimising setting-up time and maintenance. The as-is state in different companies will be examined. With this knowledge, new concepts will be developed, keeping in mind a modular configuration of the different parts involved. Concepts for modular core sliders, guides and ejectors will be developed and investigated for further use. Based on this information, the decision and design methodology for the lay-out of modular HPDC dies will be examined and developed throughout the process.
DOI: 10.48550/arxiv.1411.4044
2014
Cited 4 times
Benchmarking DataStax Enterprise/Cassandra with HiBench
This report evaluates the new analytical capabilities of DataStax Enterprise (DSE) [1] through the use of standard Hadoop workloads. In particular, we run experiments with CPU and I/O bound micro-benchmarks as well as OLAP-style analytical query workloads. The performed tests should show that DSE is capable of successfully executing Hadoop applications without the need to adapt them for the underlying Cassandra distributed storage system [2]. Due to the Cassandra File System (CFS) [3], which supports the Hadoop Distributed File System API, Hadoop stack applications should seamlessly run in DSE. The report is structured as follows: Section 2 provides a brief description of the technologies involved in our study. An overview of our used hardware and software components of the experimental environment is given in Section 3. Our benchmark methodology is defined in Section 4. The performed experiments together with the evaluation of the results are presented in Section 5. Finally, Section 6 concludes with lessons learned.
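Because CFS implements the HDFS API, a stock Hadoop workload can, in principle, simply be pointed at `cfs://` paths instead of `hdfs://` ones. The sketch below illustrates that idea with the standard WordCount example; the jar location and paths are assumptions about a typical installation, not taken from the report.

```python
import subprocess

# Assumed locations; adjust to the local DSE/Hadoop installation.
EXAMPLES_JAR = "/usr/share/hadoop/hadoop-examples.jar"
INPUT_PATH = "cfs:///benchmarks/wordcount/input"    # Cassandra File System URI
OUTPUT_PATH = "cfs:///benchmarks/wordcount/output"

# The same command would normally target hdfs:// paths; only the URI scheme changes.
subprocess.run(
    ["hadoop", "jar", EXAMPLES_JAR, "wordcount", INPUT_PATH, OUTPUT_PATH],
    check=True,
)
```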
DOI: 10.1051/epjconf/201921403006
2019
Cited 3 times
Improving efficiency of analysis jobs in CMS
Hundreds of physicists analyze data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider using the CMS Remote Analysis Builder and the CMS global pool to exploit the resources of the Worldwide LHC Computing Grid. Efficient use of such an extensive and expensive resource is crucial. At the same time, the CMS collaboration is committed to minimizing time to insight for every scientist, by pushing for as few access restrictions to the full data sample as possible and by supporting the free choice of applications to run on the computing resources. Supporting such a variety of workflows while preserving efficient resource usage poses special challenges. In this paper we report on three complementary approaches adopted in CMS to improve the scheduling efficiency of user analysis jobs: automatic job splitting, automated run time estimates and automated site selection for jobs.
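Automatic job splitting of the kind mentioned above is, at its core, simple arithmetic: given an estimated per-event processing time, a task is split so that each job stays close to a target wall-clock length. The following sketch illustrates that calculation; the numbers and the helper function are illustrative assumptions, not CMS code.

```python
import math

def split_task(total_events: int, seconds_per_event: float,
               target_job_seconds: float = 8 * 3600) -> tuple[int, int]:
    """Return (events_per_job, number_of_jobs) so each job stays near the target runtime."""
    events_per_job = max(1, int(target_job_seconds // seconds_per_event))
    n_jobs = math.ceil(total_events / events_per_job)
    return events_per_job, n_jobs

# Example: 5M events at an estimated 2 s/event, targeting roughly 8-hour jobs.
events_per_job, n_jobs = split_task(5_000_000, 2.0)
print(events_per_job, n_jobs)   # 14400 events/job, 348 jobs
```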
DOI: 10.1007/s11740-011-0318-x
2011
Design methodology for modular tools
DOI: 10.1109/greens.2015.11
2015
Evaluating the Energy Efficiency of Data Management Systems
Nowadays, developers and end users of data management systems are challenged to reduce the "energy consumption footprint" of existing implementations and configurations. In other words, energy efficiency has to be optimized, either by increasing performance or by consuming fewer resources. In fact, a large number of factors influence the performance and energy efficiency of a particular data management system. For example, replacing hardware components or the surrounding operating system can have a significant impact. Both developers and end users put much effort into finding performance bottlenecks, better hardware resource utilization and better configurations. Moreover, in a scale-out scenario, end users often face the task of finding a hardware configuration that offers both reasonable performance and reasonable energy consumption, i.e. resource planning. This paper proposes a new approach to evaluate the performance of a data management system and its impact on energy efficiency, with the goal of optimizing it. The approach introduces a Queued Petri Nets model whose simulation runs are intended to drastically reduce the investment, both in time and in hardware, compared to traditional approaches such as regression and compatibility tests. The model's predictions in terms of performance and energy efficiency were evaluated and compared to the actual experimental results. On average, the predicted and experimental results (response time and energy efficiency) differ by 24 percent.
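Energy efficiency in this context is usually defined as useful work per unit of energy, i.e. throughput divided by the product of average power and runtime. The small sketch below shows how such a metric and the model-versus-measurement deviation quoted above (about 24% on average) could be computed; the figures are invented placeholders, not results from the paper.

```python
def energy_efficiency(transactions: float, avg_power_watts: float, runtime_s: float) -> float:
    """Useful work per joule: transactions / (average power * runtime)."""
    return transactions / (avg_power_watts * runtime_s)

def relative_error(predicted: float, measured: float) -> float:
    """Relative deviation of a model prediction from the measured value."""
    return abs(predicted - measured) / measured

# Placeholder numbers for illustration only.
measured = energy_efficiency(transactions=1_000_000, avg_power_watts=250.0, runtime_s=600.0)
predicted = measured * 1.24      # a prediction that is 24% off, as in the reported average
print(f"measured  : {measured:.4f} tx/J")
print(f"predicted : {predicted:.4f} tx/J")
print(f"relative error: {relative_error(predicted, measured):.0%}")
```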
DOI: 10.5281/zenodo.7621172
2023
"Towards the EU Food Safety Forum: shaping together the new collaborative platform" FoodSafety4EU PRE-FORUM 2022 "The new sustainability regulation: how to integrate it into food safety?"
DOI: 10.1177/15459683231186986
2023
Motor Decision-Making as a Common Denominator in Motor Pathology and a Possible Rehabilitation Target
Despite the substantial progress in motor rehabilitation, patient involvement and motivation remain major challenges. They are typically addressed with communicational and environmental strategies, as well as with improved goal-setting procedures. Here we suggest a new research direction and framework involving Neuroeconomics (NE) principles to investigate the role of Motor Decision-Making (MDM) parameters in the motivational component and motor performance in rehabilitation. We argue that investigating NE principles could bring new approaches aimed at increasing active patient engagement in the rehabilitation process by introducing more movement choice and adapting existing goal-setting procedures. We discuss possible MDM implementation strategies and illustrate possible research directions using examples of stroke and psychiatric disorders.
DOI: 10.59642/jrtmed.1.2023.01
2023
The cooperative and the circular economy: embracing collaboration towards sustainability
The article investigates how cooperatives and the circular economy complement each other, highlighting their relevance in building a sustainable future. By integrating circular economy principles into cooperatives, sustainable resource management and opportunities for development are fostered. Cooperatives, with their collaborative nature and focus on member interests, provide an ideal framework for implementing circular economy practices. They promote efficient resource collection, reuse and recycling, reducing environmental impact and dependence on finite resources. Collaboration among cooperative members, local communities and other stakeholders facilitates the exchange of knowledge, experience and resources that is vital for implementing the circular economy; by joining forces, these key participants can develop innovative initiatives and projects that deliver economic, social and environmental benefits. Finally, the authors emphasize the importance of promoting and supporting cooperatives within the circular economy model. By promoting collaboration and creating an enabling environment for the development of circular-economy-oriented cooperatives, a sustainable future can be constructed in which resources are managed efficiently, waste is minimized, and communities develop in a sustainable way.
DOI: 10.59957/jctm.v58i5.118
2023
Ryegrass as a feedstock for bioethanol production
DOI: 10.1051/epjconf/201921403056
2019
Improving the Scheduling Efficiency of a Global Multi-Core HTCondor Pool in CMS
Scheduling multi-core workflows in a global HTCondor pool is a multi-dimensional problem whose solution depends on the requirements of the job payloads, the characteristics of the available resources, and boundary conditions such as fair share and prioritization imposed on the matching of jobs to resources. Within the context of a dedicated task force, CMS has significantly increased the scheduling efficiency of workflows in reusable multi-core pilots through various improvements addressing the limitations of the GlideinWMS pilots, the accuracy of resource requests, the efficiency and speed of the HTCondor infrastructure, and the job matching algorithms.
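The core difficulty described above is packing jobs with mixed core counts into reusable multi-core pilots without leaving cores idle. The toy first-fit sketch below illustrates only that packing aspect; it is a simplification of what GlideinWMS/HTCondor actually do, with made-up job and pilot sizes.

```python
from dataclasses import dataclass, field

@dataclass
class Pilot:
    """A reusable multi-core pilot with a fixed number of CPU cores."""
    cores_total: int
    cores_free: int = field(init=False)
    jobs: list = field(default_factory=list)

    def __post_init__(self):
        self.cores_free = self.cores_total

def first_fit(job_cores: list[int], pilots: list[Pilot]) -> list[int]:
    """Assign each job (by requested core count) to the first pilot that still fits it."""
    unmatched = []
    for cores in job_cores:
        for pilot in pilots:
            if pilot.cores_free >= cores:
                pilot.cores_free -= cores
                pilot.jobs.append(cores)
                break
        else:
            unmatched.append(cores)
    return unmatched

pilots = [Pilot(8), Pilot(8)]                  # two 8-core pilots
leftover = first_fit([4, 1, 8, 2, 1], pilots)  # mixed single- and multi-core requests
used = sum(p.cores_total - p.cores_free for p in pilots)
print(f"cores used: {used}/16, unmatched jobs: {leftover}")
```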
DOI: 10.1007/978-3-642-21067-9_5
2011
Hybrid Production Systems
DOI: 10.1145/2513591.2513643
2013
A hybrid page layout integrating PAX and NSM
The paper explores a hybrid page layout (HPL), combining the advantages of NSM and PAX. The design defines a continuum between NSM and PAX supporting both efficient scans minimizing cache faults and efficient insertions and updates. Our evaluation shows that HPL fills the PAX-NSM performance gap.
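For readers unfamiliar with the two layouts being combined: NSM stores complete records contiguously within a page, while PAX keeps all records of a page together but groups each attribute into its own mini-page, which is what makes scans cache-friendly. The sketch below contrasts the two organizations in memory; it is a didactic illustration, not the HPL design from the paper.

```python
# Three records with attributes (id, age, salary).
records = [(1, 34, 51000), (2, 29, 48000), (3, 41, 67000)]

# NSM ("row store" inside a page): whole records stored one after another,
# so scanning a single attribute touches every record's bytes.
nsm_page = [value for record in records for value in record]
# -> [1, 34, 51000, 2, 29, 48000, 3, 41, 67000]

# PAX: the same records on the same page, but each attribute is kept in its
# own mini-page, so a scan over one attribute reads contiguous values.
pax_page = [[r[i] for r in records] for i in range(3)]
# -> [[1, 2, 3], [34, 29, 41], [51000, 48000, 67000]]

# Scanning the "age" attribute:
ages_nsm = nsm_page[1::3]        # strided access across full records
ages_pax = pax_page[1]           # one contiguous mini-page
assert ages_nsm == ages_pax == [34, 29, 41]
```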
2011
Casting of microstructured shark skin surfaces and applications on aluminum casting parts
DOI: 10.4028/www.scientific.net/msf.618-619.581
2009
Replication of Microscale Features via Investment Casting Using the Example of an Aluminium Intake Manifold of a Gasoline Engine with an Inner Technical Shark Skin Surface
Within the project “Functional Surfaces via Micro- and Nanoscaled Structures”, an investment casting process is being developed to produce three-dimensional functional surfaces down to a structural size of 1 µm on near-net-shape casting parts. The common way to realise functional microscale features on metallic surfaces is to use laser ablation, electro-discharge machining or micro milling. The handicap of these processes is their limited productivity. In order to raise efficiency, microscale features will be replicated using the investment casting process. The main research objective deals with the investigation of the single process steps with regard to moulding accuracy. Current results concerning the making of the wax pattern and the ceramic mould, as well as the casting of an aluminium alloy, are presented. Using the example of an intake manifold of a gasoline race car engine, a technical shark skin surface was defined in order to reduce the drag of the incoming air. Possible process strategies to realise microscale features on an inner surface of a casting part were developed.
DOI: 10.1162/lmj_a_01033
2018
The Music of Human Hormones
In this study, the authors take on the challenge to translate biological form (science) into musical form (art). Through scientifically developed methodology, the authors link two aspects of human experience that influence human emotions: hormones, from the inside, and music, from the outside. The authors develop an original algorithm, which they use to represent the properties and the effects of the human hormone oxytocin in a musical composition. The authors performed a neurological test to verify the accuracy of the musical interpretation and investigated the parallel neurological impacts of the hormone’s biological and musical form. This article describes the preliminary results of the study.
DOI: 10.1051/epjconf/201921403002
2019
Exploring GlideinWMS and HTCondor scalability frontiers for an expanding CMS Global Pool
The CMS Submission Infrastructure Global Pool, built on GlideinWMS and HTCondor, is a worldwide distributed dynamic pool responsible for the allocation of resources for all CMS computing workloads. Matching the continuously increasing demand for computing resources by CMS requires the anticipated assessment of its scalability limitations. In addition, the Global Pool must be able to expand in a more heterogeneous environment, in terms of resource provisioning (combining Grid, HPC and Cloud) and workload submission. A dedicated testbed has been set up to simulate such conditions with the purpose of finding potential bottlenecks in the software or its configuration. This report provides a thorough description of the various scalability dimensions in size and complexity that are being explored for the future Global Pool, along with the analysis and solutions to the limitations proposed with the support of the GlideinWMS and HTCondor developer teams.
DOI: 10.1051/epjconf/202024503016
2020
Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era
Efforts in distributed computing of the CMS experiment at the LHC at CERN are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to resources provided by traditional Grid sites, being also much more diverse and heterogeneous. Handling their special capabilities or limitations and maintaining global flexibility and efficiency, while also operating at scales much higher than the current capacity, are the major challenges being addressed by the CMS Submission Infrastructure team. These proceedings discuss the risks to the stability and scalability of the CMS HTCondor infrastructure extrapolated to such a scenario, thought to be derived mostly from its growing complexity, with multiple Negotiators and schedulers flocking work to multiple federated pools. New mechanisms for enhanced customization and control over resource allocation and usage, mandatory in this future scenario, are also described.
2014
Herstellung mikrostrukturierter Oberflächen im Feingießverfahren (Production of Microstructured Surfaces by Investment Casting)
2011
Gießen mikrostrukturierter Oberflächen am Beispiel eines Luftmengenbegrenzers mit innenliegender Haifischhaut (Casting of Microstructured Surfaces Using the Example of an Air Flow Restrictor with an Internal Shark Skin Surface)
DOI: 10.1007/978-3-319-48160-9_123
2011
Investment Casting of Surfaces with Microholes and Their Possible Applications
DOI: 10.1088/0960-1317/21/8/085026
2011
Replication of specifically microstructured surfaces in A356-alloy via lost wax investment casting
A common way of realizing microstructural features on metallic surfaces is to generate the designated pattern on each single part by means of microstructuring technologies such as laser ablation, electric discharge machining or micromilling. The disadvantage of these process chains is the limited productivity due to the additional processing of each part. The approach of this work is to replicate microstructured surfaces from a master pattern via lost wax investment casting in order to reach a higher productivity. We show that microholes of different sizes (∅ 15–22 µm at depths of 6–14 µm) can be replicated in AlSi7Mg-alloy from a laser-structured master pattern via investment casting. However, some loss of molding accuracy occurs during the multi-stage molding process. Approximately 50% of the original microfeatures' height is lost during the wax injection step. In the following process step of manufacturing a gypsum-bonded mold, a further loss in the surface quality of the microfeatures can be observed. In the final process step of casting the aluminum melt, the microfeatures are filled without any loss of molding accuracy and replicate the surface quality of the gypsum mold. Contact angle measurements of ultrapure water on the cast surfaces show a decrease in wettability on the microstructured regions (75°) compared to the unstructured region (60°).
2011
Numerical Simulation of the Wax Injection Process
2012
Shortening Process Chain for Manufacturing Components with Functional surfaces Via Micro- and Nanostructures
DOI: 10.1007/978-3-642-20693-1_5
2011
Hybride Produktionssysteme (Hybrid Production Systems)
DOI: 10.1002/9781118061992.ch123
2011
Investment Casting of Surfaces with Microholes and Their Possible Applications
This chapter contains sections titled: Introduction; Experimental Details; Results and Discussion; Conclusions, Prospects and Possible Applications.
2010
Transient 3D numerical simulation of directly coupled mold filling and solidification in investment casting of A356 using the VOF multi-phase approach
2010
Concept for die modularisation applied on profile extrusion
DOI: 10.1088/1742-6596/898/4/042048
2017
A comparison of different database technologies for the CMS AsyncStageOut transfer database
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages user transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate.
DOI: 10.1088/1742-6596/898/9/092036
2017
Efficient monitoring of CRAB jobs at CMS
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN.
DOI: 10.4018/978-1-5225-1759-7.ch021
2017
The Heterogeneity Paradigm in Big Data Architectures
This chapter introduces the concept of heterogeneity as a perspective in the architecture of big data systems targeted at both vertical and generic workloads, and discusses how this can be linked with the existing Hadoop ecosystem (as of 2015). The cost factor of a big data solution and its characteristics can influence its architectural patterns and capabilities; to capture this, an extended model based on the 3V paradigm (Extended 3V) is introduced. This is examined on a hierarchical set of four layers (Hardware, Management, Platform and Application). A list of components is provided for each layer, as well as a classification of their role in a big data solution.
2008
Produktionstechnik für Hochlohnländer (Production Technology for High-Wage Countries)
DOI: 10.1007/978-3-030-78307-5_4
2022
Big Data and AI Pipeline Framework: Technology Analysis from a Benchmarking Perspective
Big Data and AI Pipeline patterns provide a good foundation for the analysis and selection of technical architectures for Big Data and AI systems. Experience from many projects in the Big Data PPP program has shown that a number of projects use similar architectural patterns, with variations only in the choice of the technology components within the same pattern. The DataBench project has developed a Big Data and AI Pipeline Framework, which is used for the description of pipeline steps in Big Data and AI projects and supports the classification of benchmarks. It covers four pipeline steps: Data Acquisition/Collection and Storage; Data Preparation and Curation; Data Analytics with AI/Machine Learning; and Action and Interaction, including Data Visualization, User Interaction and API Access. The project has also created a toolbox that supports the identification and use of existing benchmarks according to these steps, in addition to all of the different technical areas and data types in the BDV Reference Model. An observatory, a tool accessed via the toolbox for observing the popularity, importance and visibility of topic terms related to Artificial Intelligence and Big Data technologies, has also been developed and is described in this chapter.
DOI: 10.18421/tem112-45
2022
Extended Model of Code Orchestration and Deployment Platform
This paper focuses on the process of continuous integration and the corresponding orchestration tooling. It provides a summary of existing tooling with a feature analysis and applications. The research explores techniques, processes and solutions for code orchestration. It includes a comparison of modern platforms and discusses the extensibility of such products by presenting an architectural model that supports the integration of general-purpose extensions.
DOI: 10.1051/epjconf/201921403004
2019
Producing Madgraph5_aMC@NLO gridpacks and using TensorFlow GPU resources in the CMS HTCondor Global Pool
The CMS experiment has an HTCondor Global Pool, composed of more than 200K CPU cores available for Monte Carlo production and the analysis of data. The submission of user jobs to this pool is handled either by CRAB, the standard workflow management tool used by CMS users to submit analysis jobs requiring event processing of large amounts of data, or by CMS Connect, a service focused on final-stage condor-like analysis jobs and applications that already have a workflow job manager in place. The latter scenario can bring cases in which workflows need further adjustments in order to work efficiently in a globally distributed pool of resources. For instance, the generation of matrix elements for high energy physics processes via Madgraph5_aMC@NLO and the usage of tools not (yet) fully supported by the CMS software, such as TensorFlow with GPU support, are tasks with particular requirements. A special adaptation, either at the pool factory level (advertising GPU resources) or at the execute level (e.g. to handle special parameters that describe certain needs of the remote execute nodes during submission), is needed in order to work adequately in the CMS Global Pool. This contribution describes the challenges and the efforts made towards adapting such workflows so they can properly profit from the Global Pool via CMS Connect.
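Requesting GPU resources, as described above, ultimately comes down to the job asking HTCondor for GPUs in its submit description (e.g. via `request_gpus`) and the pool advertising matching machines. The sketch below writes a minimal submit description and hands it to `condor_submit`; the executable, script and file names are placeholders for a TensorFlow job, and this is a generic HTCondor example rather than the CMS Connect configuration itself.

```python
import pathlib
import subprocess

# A minimal HTCondor submit description asking for one GPU alongside CPU and
# memory; "run_training.sh" and "train.py" are placeholder names.
submit_description = """\
executable      = run_training.sh
arguments       = train.py
request_cpus    = 4
request_gpus    = 1
request_memory  = 8192
output          = train.out
error           = train.err
log             = train.log
queue 1
"""

pathlib.Path("train.sub").write_text(submit_description)
subprocess.run(["condor_submit", "train.sub"], check=True)
```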
DOI: 10.37129/2313-7509.2019.12.2.119-130
2019
DETERMINATION OF THE BASIC REQUIREMENTS FOR THE DESIGN AND TECHNOLOGICAL PROCESS OF MANUFACTURING AMMUNITION WEAPONS
The article identifies the need to create basic requirements for the design and manufacturing process of ammunition for small arms, based on an analysis of the work of previous years in our country and abroad. The basic principles for classifying small arms ammunition are presented; the characteristics of the firing process needed for further ballistic calculations are briefly analyzed; the necessary loading conditions are determined; the dynamic and ballistic characteristics of bullets are determined; and their reliable functioning during firing is evaluated. Methods for calculating the main extraction parameters and the strength characteristics of cartridges are given. The design procedure is determined by data on the construction and purpose of different types of cartridges, and methods of testing and acceptance of products are considered. The production processes for the metal elements of small arms ammunition are highlighted separately. It is known that the only source of supply of small arms ammunition for the Armed Forces of Ukraine was the Luhansk Cartridge Plant, which apparently relied on design and production sources and literature dating from the Soviet Union. Unfortunately, during the years of Ukraine's independence, both in peacetime and during the armed conflict in eastern Ukraine, no attention was paid to developing requirements for the design and production not only of small arms ammunition but of ammunition as a whole, which is why it remains unclear what modern private investors rely on when offering, and already partially meeting, the needs of the Armed Forces of Ukraine for some types of ammunition. It is no secret that the quantity of small arms ammunition required, even in peacetime, runs into billions of pieces, which entails considerable material costs not only for production but also for transportation, storage and testing. Accordingly, the purpose of the article is to determine the basic requirements for the design and manufacturing process of small arms ammunition, taking into account the constant development and modernization not only of the cartridges themselves but also of small arms.
DOI: 10.1109/icai50593.2020.9311315
2020
Table of contents
Analysis of the control accuracy in steady state of a hybrid PI controller V. Uzunov and I
1992
ASSISTANT: computer consulting system for choice and evaluation of education technology