ISBN 9781601322586

A Parallel Implementation of the Modus Ponens Inference Rule in a DNA Strand-Displacement System

Editors: Hamid R. Arabnia, Hiroshi Ishii, Minoru Ito, Hiroaki Nishikawa, Fernando G. Tinetti, George A. Gravvanis, George Jandieri, and Ashu M. G. Solo. CSREA Press.

Int'l Conf. Par. and Dist. Proc. Tech. and Appl. | PDPTA'13 | p. 82


Jack K. Horner

P.O. Box 266, Los Alamos, NM 87544, USA

PDPTA 2013

Abstract

Computation implemented in DNA reactions promises to advance high-performance computing (HPC) for at least three reasons. It (1) is inherently Amdahl-scalable by reactor volume, (2) has a power/operations-per-second (OPS) ratio that is potentially orders of magnitude smaller than that of silicon circuits, and (3) can provide a natural access interface to DNA-based high-density information storage. In order to serve as a general-purpose computing regime, DNA computing will have to support Boolean operations. Here, I describe an implementation of the modus ponens inference rule (commonly used in Boolean logic) in a DNA strand-displacement (DSD) system.

Keywords: DNA computing, DNA strand displacement, modus ponens

1.0 Introduction

Computing implemented in DNA reactions promises to advance high-performance computing (HPC) for at least three reasons.
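As an illustration of the logic involved, modus ponens can be modelled in software as a toehold-mediated displacement reaction. This is a hypothetical analogue for exposition, not the paper's actual DSD design; all species names below are invented:

```python
# Hypothetical software analogue of a DSD implementation of modus ponens:
# a gate complex encodes P -> Q; a free input strand P displaces the
# gate's output strand, releasing Q as a free signal plus inert waste.
from collections import Counter

def displace(species):
    """Apply the rule  P + Gate(P->Q) -> Q + Waste  until no match remains."""
    pool = Counter(species)
    while pool["P"] > 0 and pool["Gate_P_implies_Q"] > 0:
        pool["P"] -= 1
        pool["Gate_P_implies_Q"] -= 1
        pool["Q"] += 1        # released output strand: the inferred proposition
        pool["Waste"] += 1    # spent double-stranded complex
    return +pool              # unary plus drops species with zero count

# Modus ponens: from P and (P -> Q), infer Q.
print(displace(["P", "Gate_P_implies_Q"]))  # Counter({'Q': 1, 'Waste': 1})
```

The displacement runs to completion regardless of how many copies of P and the gate are present, which is the sense in which the reaction is parallel: every matching pair reacts independently.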


Self-Timed Single Circular Pipeline for Multiple FFTs


PDPTA'13 | p. 625

Ryuichi TAGUCHI, Hajime OHISO, Keizo MENDORI, Kei MIYAGI, Makoto IWATA

Graduate School of Engineering, Kochi University of Technology, Kami, Kochi 782-8502, Japan

Abstract— Future wireless ad hoc networks should accommodate different types of mobile terminals equipped with different wireless communication schemes. Especially when a disaster happens, guaranteeing dependable connectivity among mobile terminals will be indispensable for delivering emergency information over whatever wireless links remain available. One of the key technologies for realizing such heterogeneous wireless communication systems is an adaptive fast Fourier transform (FFT) engine that accepts multiple wireless signal sequences with different sampling rates and different FFT lengths.

This paper discusses the basic idea of a novel FFT engine based on a self-timed (clockless) pipeline circuit that computes multiple FFTs in parallel. The potential performance of the proposed circuit is then evaluated through an FPGA implementation. Preliminary results indicate the proposed circuit could process two 4096-point FFTs at 276
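The paper's self-timed hardware is not reproduced here, but the scheduling idea, two independent FFT jobs sharing one pipeline by alternating stages, can be sketched in software. The radix-2 FFT and the round-robin loop below are illustrative assumptions, not the authors' circuit:

```python
import cmath

def fft_stages(x):
    """In-place radix-2 DIT FFT over x (length a power of two), yielding
    once per butterfly stage so a scheduler can interleave jobs."""
    n = len(x)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    m = 2
    while m <= n:
        w_m = cmath.exp(-2j * cmath.pi / m)
        for k in range(0, n, m):
            w = 1 + 0j
            for t in range(m // 2):
                u = x[k + t]
                v = x[k + t + m // 2] * w
                x[k + t] = u + v
                x[k + t + m // 2] = u - v
                w *= w_m
        m <<= 1
        yield True  # stage done: the shared "pipeline" may switch jobs

# Two independent FFT jobs time-share one pipeline, one stage per turn.
a = [complex(v) for v in (1, 0, 0, 0, 0, 0, 0, 0)]  # impulse -> flat spectrum
b = [complex(v) for v in (1, 1, 1, 1, 1, 1, 1, 1)]  # constant -> energy in bin 0
jobs = [fft_stages(a), fft_stages(b)]
while jobs:
    jobs = [g for g in jobs if next(g, False)]  # drop finished jobs
```

In the actual circuit the switching would be driven by the self-timed handshakes rather than an explicit software loop; the sketch only shows that stage-granular interleaving preserves both results.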


Workstation Footprint Tactical Computing


PDPTA'13 | p. 497

S. Park1, D. Shires1, J. Ross2, D. Richie3, J. Ruloff2, and B. Henz1

1 Computational Sciences Division, U.S. Army Research Laboratory, APG, MD, USA

2 Dynamics Research Corp., Andover, MA, USA

3 Brown Deer Technology, Forest Hill, MD, USA

Abstract— In terms of computing hardware, heterogeneous processor types have become an integral part of many modern devices. In particular, graphics processing units (GPUs) complement central processing units in platforms ranging from portable smartphones to large-scale supercomputers. With a mixture of resources available, optimally mapping algorithms to computing architectures improves performance and saves power. Given the advances in the raw theoretical floating-point processing power of massively parallel graphics processors, mobile high-performance GPU-populated workstations are now a feasible option for advancing compute-intensive tactical computations on-board.


Job Parallelism using Graphical Processing Unit Individual Multi-Processors and Localised Memory


PDPTA'13 | p. 578

D.P. Playne and K.A. Hawick

Computer Science, Massey University, North Shore 102-904, Auckland, New Zealand

d.p.playne@massey.ac.nz, k.a.hawick@massey.ac.nz

Tel: +64 9 414 0800  Fax: +64 9 441 8181

April 2013

Abstract

Graphical Processing Units (GPUs) are usually programmed to provide data-parallel acceleration to a host processor. Modern GPUs typically have an internal multi-processor (MP) structure that can be exploited in an unusual way to offer semi-independent task parallelism, provided the MPs can operate within their own localised memory and apply data parallelism to their own problem subset. We describe a combined simulation and statistical-analysis application using component labelling and benchmark it on a range of modern GPU and CPU devices with various numbers of cores. As well as demonstrating a high degree of job parallelism and throughput, we find that a typical GPU MP outperforms a conventional CPU core.
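Component labelling, the kernel the benchmark is built around, can be sketched sequentially with union-find; under the scheme the abstract describes, each GPU multi-processor would run one such job on its own problem subset in local memory. This is a minimal sketch, not the authors' implementation:

```python
# Minimal connected-component labelling on a binary 2-D grid via union-find.
# In a GPU job-parallel setting, each MP would label one independent grid.
def label_components(grid):
    h, w = len(grid), len(grid[0])
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge each occupied site with its occupied right and down neighbours.
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            if x + 1 < w and grid[y][x + 1]:
                union(y * w + x, y * w + x + 1)
            if y + 1 < h and grid[y + 1][x]:
                union(y * w + x, (y + 1) * w + x)
    # Number of distinct clusters = distinct roots among occupied sites.
    return len({find(y * w + x) for y in range(h)
                for x in range(w) if grid[y][x]})

grid = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
print(label_components(grid))  # 2 clusters
```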


Optimizing the use of the Hard Disk in MapReduce Frameworks for Multi-core Architectures*


PDPTA'13 | p. 264

Tharso Ferreira1, Antonio Espinosa1, Juan Carlos Moure2, and Porfidio Hernández2

Computer Architecture and Operating Systems Department, University Autonoma of Barcelona, Bellaterra, Barcelona, Spain

1 {tsouza, antonio.espinosa}@caos.uab.es  2 {juancarlos.moure, porfidio.hernandez}@uab.es

Abstract— MapReduce simplifies parallel programming by abstracting away programmer responsibilities such as synchronization and task management. The paradigm allows the programmer to write sequential code that is automatically parallelized. MapReduce frameworks developed for multi-core architectures process large numbers of keys, which consequently grows the intermediate data structures and, in some environments, consumes all available main memory. Recently, the development of MapReduce frameworks for multi-core architectures that distribute keys through the memory hierarchy has mitigated the problem of the generated data exhausting main memory. But in an environment where all threads access the same hard disk, certain situations can lead to competition among the threads to move the keys generated in main memory to the hard disk, creating a bottleneck.
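The bottleneck the abstract describes can be illustrated with a toy word-count mapper that spills its in-memory key counts to a single shared disk; the lock serialising the writes stands in for the contended hard disk. All names and thresholds here are assumptions for illustration, not part of any real framework:

```python
# Toy illustration of mapper threads competing for one shared disk when
# spilling intermediate keys from main memory.
import json, os, tempfile, threading
from collections import Counter

SPILL_THRESHOLD = 4            # assumed: max distinct in-memory keys per mapper
disk_lock = threading.Lock()   # models the single shared hard disk
spill_dir = tempfile.mkdtemp()

def mapper(tid, records):
    """Count keys in memory; spill to disk whenever the buffer fills."""
    buf, spills = Counter(), 0

    def spill():
        nonlocal spills
        with disk_lock:  # every thread serialises here: the bottleneck
            path = os.path.join(spill_dir, f"spill-{tid}-{spills}.json")
            with open(path, "w") as f:
                json.dump(buf, f)
        buf.clear()
        spills += 1

    for key in records:
        buf[key] += 1
        if len(buf) >= SPILL_THRESHOLD:
            spill()
    if buf:
        spill()   # final spill of the remainder

def reduce_spills():
    """Merge every spill file into the final key counts."""
    total = Counter()
    for name in os.listdir(spill_dir):
        with open(os.path.join(spill_dir, name)) as f:
            total.update(json.load(f))
    return total

data = ["a", "b", "a", "c", "d", "e", "a", "b"]
threads = [threading.Thread(target=mapper, args=(i, data)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(reduce_spills()["a"])  # 6: each of the two mappers saw "a" three times
```

Lowering SPILL_THRESHOLD increases spill frequency and hence the time threads spend waiting on `disk_lock`, which is exactly the competition the paper sets out to optimize.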
