
Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani


Algorithms

by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
Prologue
Look around you. Computers and networks are everywhere, enabling an intricate web of complex human activities: education, commerce, entertainment, research, manufacturing, health management, human communication, even war. Of the two main technological underpinnings of this amazing proliferation, one is obvious: the breathtaking pace with which advances in microelectronics and chip design have been bringing us faster and faster hardware.
This book tells the story of the other intellectual enterprise that is crucially fueling the computer revolution:
Books and algorithms
Two ideas changed the world. In 1448 in the German city of Mainz a goldsmith named Johann Gutenberg discovered a way to print books by putting together movable metallic pieces. Literacy spread, the Dark Ages ended, the human intellect was liberated, science and technology triumphed, the Industrial Revolution happened. Many historians say we owe all this to typography. Imagine a world in which only an elite could read these lines! But others insist that the key development was not typography, but algorithms....

The FXT library: Fast transforms and low level algorithms


The FXT library: Fast transforms and low level algorithms

Here you find
  • the FXT package
  • the "Algorithms for programmers" text (the fxtbook)
  • the amorphous FFT bucket moved to the fftpage
The FXT library
Latest FXT version: fxt-2007.01.13.tgz (approx. 950kB), distributed under the GPL.
Here are some (well, more than 200) demos.
Much of the low level bit-magic code is shown on the bit wizardry page.
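To give a flavor of the sort of material on that page, here is a minimal C++ sketch of two classic bit tricks. It is an independent illustration, not code taken from FXT, and the function names are made up.

#include <cstdint>
#include <cstdio>

// Isolate the lowest set bit of x: only that bit survives the AND with
// the two's complement negation.
inline uint64_t lowest_set_bit(uint64_t x) { return x & (~x + 1); }

// Test whether x is a power of two: clearing the lowest set bit of a
// nonzero power of two leaves zero.
inline bool is_pow2(uint64_t x) { return x != 0 && (x & (x - 1)) == 0; }

int main() {
    std::printf("%llx\n", (unsigned long long)lowest_set_bit(0xb0));  // prints 10
    std::printf("%d\n", is_pow2(64));                                 // prints 1
    return 0;
}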
Read the short description in description.txt and the Linux Software Map (LSM) file fxt.lsm.
Generated doc files:
  • aux0-doc.txt auxiliary routines
  • aux1-doc.txt auxiliary routines for 1-dim arrays
  • aux2-doc.txt auxiliary routines for 2-dim arrays
  • bits-doc.txt bit wizardry
  • bpol-doc.txt binary polynomials and arithmetic over GF(2**n)
  • comb-doc.txt combinatorics (combinations, partitions etc.)
  • convolution-doc.txt convolution
  • mult-doc.txt multiplication of large numbers
  • correlation-doc.txt correlation
  • dctdst-doc.txt cosine and sine transforms
  • ds-doc.txt data structures (FIFO, heap etc.)
  • fft-doc.txt fast Fourier transforms
  • realfft-doc.txt real valued FFTs
  • fht-doc.txt fast Hartley transforms
  • walsh-doc.txt Walsh transforms
  • haar-doc.txt Haar transforms
  • mod-doc.txt modular arithmetic and number theory
  • ntt.h-doc.txt number theoretic transforms
  • perm-doc.txt permutations
  • sort-doc.txt sorting and searching
  • data-doc.txt tables of mathematical data. The tables can also be accessed via the mathdata page.
A generated index of all docs is also available.
Click to Read More/Download

Art of Programming Contest - C Programming Tutorials | Data Structures | Algorithms


Art of Programming Contest - C Programming Tutorials | Data Structures | Algorithms

Compiled by Ahmed Shamsul Arefin

Review Note

A computer programming contest is a pleasurable event for budding programmers, but only a few books are available that serve as a training manual for programming competitions. This book is designed to serve as a textbook for an algorithm course focusing on programming as well as a programming course focusing on algorithms. The book is specially designed to train students to participate in competitions such as the ACM International Collegiate Programming Contest.
The book covers several important topics related to the development of programming skills, such as fundamental concepts of contests, game plans for a contest, essential data structures for contests, input/output techniques, the brute force method, mathematics, sorting, searching, greedy algorithms, dynamic programming, graphs, computational geometry, the Valladolid Online Judge problem categories, selected ACM programming problems, common codes/routines for programming, the Standard Template Library (STL), and PC2 contest administration and team guide. The book also lists some important websites and books for ACM/ICPC programmers.
I believe that the book will be of immense use for young programmers interested in taking part in programming competitions.
Dr. M. Lutfar Rahman
Professor, Department of Computer Science and Engineering (CSE)
University of Dhaka, Bangladesh.
Click to Read More/Download

Combinatorial Algorithms By Jeff Erickson


Combinatorial Algorithms

By Jeff Erickson

This PDF book contains senior-level algorithms material for computer engineering students. The topics covered are:
  • Notes on solving recurrences
  • Introduction
  • Divide and conquer
  • Dynamic programming
  • Randomization: Nuts and bolts
  • Randomized treaps
  • Randomized mincut
  • Uniform and universal hashing
  • Skip lists
  • Amortized analysis
  • Scapegoat trees and splay trees
  • Union-find
  • Fibonacci heaps
  • Graph algorithms
  • Representing and searching graphs
  • Single-source shortest paths
  • All-pairs shortest paths
  • Minimum spanning trees
  • Seminumerical algorithms
  • Fast Fourier transforms
  • Number-theoretic algorithms
  • String matching
  • String matching: Rabin-Karp
  • String matching: Knuth-Morris-Pratt
  • Computational Geometry
  • Convex hulls
  • Plane sweep algorithms
  • Polygon triangulation
  • Lower bounds
  • Information theory
  • Adversary arguments
  • Reductions and transformations
  • NP-hard, NP-easy, and NP-complete problems

Efficient Algorithms for Sorting and Synchronization By Andrew Tridgell


Efficient Algorithms for Sorting and Synchronization

By Andrew Tridgell

This thesis presents efficient algorithms for internal and external parallel sorting and remote data update. The sorting algorithms approach the problem by concentrating first on highly efficient but incorrect algorithms followed by a cleanup phase that completes the sort. The remote data update algorithm, rsync, operates by exchanging block signature information followed by a simple hash search algorithm for block matching at arbitrary byte boundaries. The last chapter of the thesis examines a number of related algorithms for text compression, differencing and incremental backup.
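The block-matching step depends on a checksum that can be slid forward one byte at a time. The C++ sketch below shows the general shape of such a rolling sum; it is a simplified stand-in for illustration, not the exact checksum used by rsync.

#include <cstdint>
#include <cstdio>
#include <vector>

// A simple rolling checksum over a window of n bytes, kept mod 2^16:
//   a = sum of the bytes in the window
//   b = sum of buf[i] * (n - i) over the window
// Sliding the window one byte only needs the byte that leaves and the byte that enters.
struct RollingSum {
    uint32_t a = 0, b = 0, n = 0;

    void init(const uint8_t* buf, uint32_t len) {
        a = b = 0; n = len;
        for (uint32_t i = 0; i < len; ++i) {
            a = (a + buf[i]) & 0xffff;
            b = (b + (len - i) * buf[i]) & 0xffff;
        }
    }
    // Slide the window: 'out' leaves on the left, 'in' enters on the right.
    void roll(uint8_t out, uint8_t in) {
        a = (a - out + in) & 0xffff;
        b = (b - n * out + a) & 0xffff;
    }
    uint32_t digest() const { return (b << 16) | a; }
};

int main() {
    std::vector<uint8_t> data = {'h','e','l','l','o',' ','w','o','r','l','d'};
    const uint32_t block = 4;

    RollingSum r;
    r.init(data.data(), block);
    for (uint32_t i = 0; i + block < data.size(); ++i) {
        std::printf("offset %u: %08x\n", i, r.digest());
        r.roll(data[i], data[i + block]);
    }
    return 0;
}

Because the checksum of the next window is computed in constant time, every byte offset of a large file can be scanned cheaply, which is the property the block-matching search relies on.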
Internal Parallel Sorting
My first introduction to the problem of parallel sorting came from a problem in the implementation of an automatic speech recognition training program. A set of speech data needed to be normalized in order to be used as the input to a recurrent neural network system and I decided that a quick-and-dirty way of doing this would be to sort the data, then sample it at regular intervals to generate a histogram.

Foundations of Computer Science By Lawrence C Paulson


Foundations of Computer Science

By Lawrence C Paulson

This course has two objectives. First (and obvious) is to teach programming. Second is to present some fundamental principles of computer science, especially algorithm design. Most students will have some programming experience already, but there are few people whose programming cannot be improved through greater knowledge of basic principles. Please bear this point in mind if you have extensive experience and find parts of the course rather slow.
The programming in this course is based on the language ML and mostly concerns the functional programming style. Functional programs tend to be shorter and easier to understand than their counterparts in conventional languages such as C. In the space of a few weeks, we shall be able to cover most of the forms of data structures seen in programming. The course also covers basic methods for estimating efficiency.
Courses in the Computer Laboratory are now expected to supply a Learning Guide to suggest extra reading, discussion topics, exercises and past exam questions. For this course, such material is attached at the end of each lecture. Extra reading is mostly drawn from my book ML for the Working Programmer (second edition), which also contains many exercises.
Contents
Introduction
Recursive Functions
O Notation: Estimating Costs in the Limit
Lists
More on Lists
Sorting
Datatypes and Trees
Dictionaries and Functional Arrays
Queues and Search Strategies
Functions as Values
List Functionals
Polynomial Arithmetic
Sequences, or Lazy Lists
Elements of Procedural Programming
Linked Data Structures

Click to Read More/Download

Graph-Theoretic Algorithms By Therese Biedl


Graph-Theoretic Algorithms

By Therese Biedl
This document results from graduate courses that I taught at the University of Waterloo in Fall 1999, Winter 2002 and Winter 2004. Students were asked to take lecture notes in LaTeX, which I then compiled and edited for this document.
The topic of graph algorithms is so unbelievably big that it is entirely hopeless to want to hold just one course about them. Even focusing on advanced graph algorithms (whatever "advanced" may mean) doesn't help, because there are still too many algorithms to cover.
The focus of this course was therefore even more specialized: I wanted to look at problems which are "normally" difficult (by which I mean a high polynomial running-time, or even NP-complete), but which can be solved faster when focusing on a specialized subclass. These notes assume that the reader is familiar with the following topics:
  • Data structures, in particular lists, stacks, queues, trees, bucket sorting, graph traversals.
  • Graph theory and combinatorial optimization, in particular graph definitions, shortest paths, flows, matchings.
  • Analysis of algorithms, in particular the O-notation and NP-completeness.
For some of these topics, a brief review is given here (see the appendix), but for more details, the reader is referred to for example [CLRS00] for data structures and analysis of algorithms, and [Gib85] for graph theory and combinatorial optimization.

Problems on Algorithms By William Gasarch and Ian Parberry


Problems on Algorithms

By William Gasarch and Ian Parberry
The ability to devise effective and efficient algorithms in new situations is a skill that separates the master programmer from the merely adequate coder. The best way to develop that skill is to solve problems. To be effective problem solvers, master-programmers-in-training must do more than memorize a collection of standard techniques and applications -- they must in addition be able to internalize and integrate what they have learned and apply it in new circumstances. This book is a collection of problems on the design, analysis, and verification of algorithms for use by practicing programmers who wish to hone and expand their skills, as a supplementary text for students enrolled in an undergraduate or beginning graduate class on algorithms, and as a self-study text for graduate students who are preparing for the qualifying (often called "breadth" or "comprehensive") examination on algorithms for a Ph.D. program in Computer Science or Computer Engineering. It is intended to augment the problem sets found in any standard algorithms textbook.
Recognizing that a supplementary text must be cost-effective if it is to be useful, the author made two important and perhaps controversial decisions in order to keep its length within reasonable bounds. The first was to cover only what are considered to be the most important areas of algorithm design and analysis. Although most instructors throw in a "fun" advanced topic such as amortized analysis, computational geometry, approximation algorithms, number-theoretic algorithms, randomized algorithms, or parallel algorithms, these areas were deliberately left out. The second decision was not to search for the origin of the problems used. A lengthy discussion of the provenance of each problem would help make this book more scholarly, but would not make it more attractive for its intended audience: students and practicing programmers.

Sorting and Searching Algorithms: A Cookbook By Thomas Niemann


Sorting and Searching Algorithms: A Cookbook

By Thomas Niemann

This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know C, and that you are familiar with concepts such as arrays and pointers.
The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by techniques for implementing dictionaries, structures that allow efficient search, insert, and delete operations. The last section illustrates algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is available at the site listed below.
Introduction
Arrays and linked lists are two basic data structures used to store information. We may wish to search, insert or delete records in a database based on a key value. This section examines the performance of these operations on arrays and linked lists...
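As a minimal illustration of the trade-off that introduction alludes to (not code from the cookbook), the C++ snippet below contrasts searching a sorted array with searching a linked list.

#include <algorithm>
#include <cstdio>
#include <list>
#include <vector>

int main() {
    // Sorted array: binary search finds a key in O(log n) comparisons,
    // but inserting in the middle costs O(n) element moves.
    std::vector<int> arr = {2, 3, 5, 7, 11, 13};
    bool in_arr = std::binary_search(arr.begin(), arr.end(), 7);

    // Linked list: insertion at a known position is O(1),
    // but finding a key requires an O(n) scan.
    std::list<int> lst = {2, 3, 5, 7, 11, 13};
    bool in_lst = std::find(lst.begin(), lst.end(), 7) != lst.end();

    std::printf("array: %d, list: %d\n", in_arr, in_lst);
    return 0;
}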

Algorithms and Complexity by Herbert S. Wilf


Algorithms and Complexity

by Herbert S. Wilf
For the past several years mathematics majors in the computing track at the University of Pennsylvania have taken a course in continuous algorithms (numerical analysis) in the junior year, and in discrete algorithms in the senior year. This book has grown out of the senior course as I have been teaching it recently. It has also been tried out on a large class of computer science and mathematics majors, including seniors and graduate students, with good results.
Selection by the instructor of topics of interest will be very important, because normally I’ve found that I can’t cover anywhere near all of this material in a semester. A reasonable choice for a first try might be to begin with Chapter 2 (recursive algorithms) which contains lots of motivation. Then, as new ideas are needed in Chapter 2, one might delve into the appropriate sections of Chapter 1 to get the concepts and techniques well in hand. After Chapter 2, Chapter 4, on number theory, discusses material that is extremely attractive, and surprisingly pure and applicable at the same time. Chapter 5 would be next, since the foundations would then all be in place. Finally, material from Chapter 3, which is rather independent of the rest of the book, but is strongly connected to combinatorial algorithms in general, might be studied as time permits.
Throughout the book there are opportunities to ask students to write programs and get them running. These are not mentioned explicitly, with a few exceptions, but will be obvious when encountered. Students should all have the experience of writing, debugging, and using a program that is nontrivially recursive, for example. The concept of recursion is subtle and powerful, and is helped a lot by hands-on practice. Any of the algorithms of Chapter 2 would be suitable for this purpose. The recursive graph algorithms are particularly recommended since they are usually quite foreign to students’ previous experience and therefore have great learning value.
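A recursive depth-first search is about the smallest nontrivially recursive graph program one can write. The following C++ sketch shows the pattern; the graph is illustrative and the code is not taken from the book.

#include <cstdio>
#include <vector>

// Recursive depth-first search over an adjacency-list graph.
void dfs(const std::vector<std::vector<int>>& adj, int v, std::vector<bool>& seen) {
    seen[v] = true;
    std::printf("visit %d\n", v);
    for (int w : adj[v])
        if (!seen[w])
            dfs(adj, w, seen);   // the recursive call does all the bookkeeping
}

int main() {
    // A small undirected graph with edges 0-1, 0-2, 1-3, 2-3, 3-4.
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2, 4}, {3}};
    std::vector<bool> seen(adj.size(), false);
    dfs(adj, 0, seen);
    return 0;
}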
In addition to the exercises that appear in this book, then, student assignments might consist of writing occasional programs, as well as delivering reports in class on assigned readings. The latter might be found among the references cited in the bibliographies in each chapter.
I am indebted first of all to the students on whom I worked out these ideas, and second to a number of colleagues for their helpful advice and friendly criticism. Among the latter I will mention Richard Brualdi, Daniel Kleitman, Albert Nijenhuis, Robert Tarjan and Alan Tucker. For the no-doubt-numerous shortcomings that remain, I accept full responsibility.
This book was typeset in TEX. To the extent that it’s a delight to look at, thank TEX. For the deficiencies in its appearance, thank my limitations as a typesetter. It was, however, a pleasure for me to have had the chance to typeset my own book. My thanks to the Computer Science department of the University of Pennsylvania, and particularly to Aravind Joshi, for generously allowing me the use of TEX facilities.

Algorithmic Information Theory By G J Chaitin


Algorithmic Information Theory

By G J Chaitin
The aim of this book is to present the strongest possible version of Gödel's incompleteness theorem, using an information-theoretic approach based on the size of computer programs.
One half of the book is concerned with studying Ω, the halting probability of a universal computer if its program is chosen by tossing a coin. The other half of the book is concerned with encoding Ω as an algebraic equation in integers, a so-called exponential diophantine equation.
Gödel's original proof of his incompleteness theorem is essentially the assertion that one cannot always prove that a program will fail to halt. This is equivalent to asking whether it ever produces any output. He then converts this into an arithmetical assertion. Over the years this has been improved; it follows from the work on Hilbert's 10th problem that Gödel's theorem is equivalent to the assertion that one cannot always prove that a diophantine equation has no solutions if this is the case.
In our approach to incompleteness, we shall ask whether or not a program produces an infinite amount of output rather than asking whether it produces any; this is equivalent to asking whether or not a diophantine equation has infinitely many solutions instead of asking whether or not it is solvable.
If one asks whether or not a diophantine equation has a solution for N different values of a parameter, the N different answers to this question are not independent; in fact, they are only log2 N bits of information, since knowing how many of the N equations are solvable is enough to determine which ones are. But if one asks whether or not there are infinitely many solutions for N different values of a parameter, then there are indeed cases in which the N different answers to these questions are independent mathematical facts, so that knowing one answer is no help in knowing any of the others. The equation encoding Ω has this property.
When mathematicians can't understand something they usually assume that it is their fault, but it may just be that there is no pattern or law to be discovered!
How to read this book: This entire monograph is essentially a proof of one theorem, Theorem D in Chapter 8. The exposition is completely self-contained, but the collection Chaitin (1987c) is a useful source of background material. While the reader is assumed to be familiar with the basic concepts of recursive function or computability theory and probability theory, at a level easily acquired from Davis (1965) and Feller (1970), we make no use of individual results from these fields that we do not reformulate and prove here. Familiarity with LISP programming is helpful but not necessary, because we give a self-contained exposition of the unusual version of pure LISP that we use, including a listing of an interpreter. For discussions of the history and significance of metamathematics, see Davis (1978), Webb (1980), Tymoczko (1986), and Rucker (1987).
Although the ideas in this book are not easy, we have tried to present the material in the most concrete and direct fashion possible. We give many examples, and computer programs for key algorithms. In particular, the theory of program-size in LISP presented in Chapter 5 and Appendix B, which has not appeared elsewhere, is intended as an illustration of the more abstract ideas in the following chapters.

Planning Algorithms By Steven M. LaValle


Planning Algorithms

By Steven M. LaValle
This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning, planning under uncertainty, sensor-based planning, visibility, decision-theoretic planning, game theory, information spaces, reinforcement learning, nonlinear systems, trajectory planning, nonholonomic planning, and kinodynamic planning.
Planning to Plan
Planning is a term that means different things to different groups of people. Robotics addresses the automation of mechanical systems that have sensing, actuation, and computation capabilities (similar terms, such as autonomous systems are also used). A fundamental need in robotics is to have algorithms that convert high-level specifications of tasks from humans into low-level descriptions of how to move. The terms motion planning and trajectory planning are often used for these kinds of problems. A classical version of motion planning is sometimes referred to as the Piano Mover’s Problem. Imagine giving a precise computer-aided design (CAD) model of a house and a piano as input to an algorithm. The algorithm must determine how to move the piano from one room to another in the house without hitting anything. Most of us have encountered similar problems when moving a sofa or mattress up a set of stairs. Robot motion planning usually ignores dynamics and other differential constraints and focuses primarily on the translations and rotations required to move the piano. Recent work, however, does consider other aspects, such as uncertainties, differential constraints, modeling errors, and optimality. Trajectory planning usually refers to the problem of taking the solution from a robot motion planning algorithm and determining how to move along the solution in a way that respects the mechanical limitations of the robot.
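As a toy discrete analogue of the problem described above (not an algorithm from the book), a breadth-first search over an occupancy grid already captures the flavor of computing a collision-free motion. In the C++ sketch below the grid, names, and costs are illustrative.

#include <cstdio>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// A toy discrete planner: breadth-first search on an occupancy grid.
// '#' cells are obstacles; returns the number of moves from (sr,sc) to
// (gr,gc), or -1 if the goal cannot be reached.
int plan(const std::vector<std::string>& grid, int sr, int sc, int gr, int gc) {
    const int R = grid.size(), C = grid[0].size();
    std::vector<std::vector<int>> dist(R, std::vector<int>(C, -1));
    std::queue<std::pair<int, int>> q;
    dist[sr][sc] = 0;
    q.push({sr, sc});
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        if (r == gr && c == gc) return dist[r][c];
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < R && nc >= 0 && nc < C &&
                grid[nr][nc] != '#' && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;
                q.push({nr, nc});
            }
        }
    }
    return -1;
}

int main() {
    std::vector<std::string> room = {
        "......",
        ".####.",
        "......",
    };
    std::printf("steps: %d\n", plan(room, 0, 0, 2, 5));  // prints 7
    return 0;
}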

Average Case Analysis of Algorithms on Sequences by Wojciech Szpankowski


Average Case Analysis of Algorithms on Sequences

by Wojciech Szpankowski
A timely book on a topic that has witnessed a surge of interest over the last decade, owing in part to several novel applications, most notably in data compression and computational molecular biology. It describes methods employed in average case analysis of algorithms, combining both analytical and probabilistic tools in a single volume.
  • Tools are illustrated through problems on words with applications to molecular biology, data compression, security, and pattern matching.
  • Includes chapters on algorithms and data structures on words, probabilistic and analytical models, inclusion-exclusion principles, first and second moment methods, subadditive ergodic theorem and large deviations, elements of information theory, generating functions, complex asymptotic methods, Mellin transform and its applications, and analytic poissonization and depoissonization.
  • Written by an established researcher with a strong international reputation in the field.

Algorithms for Modular Elliptic Curves By J. E. Cremona


Algorithms for Modular Elliptic Curves

By J. E. Cremona
This book is in three sections. First, we describe in detail an algorithm based on modular symbols for computing modular elliptic curves: that is, one-dimensional factors of the Jacobian of the modular curve $X_0(N)$, which are attached to certain cusp forms for the congruence subgroup $\Gamma_0(N)$. In the second section, various algorithms for studying the arithmetic of elliptic curves (defined over the rationals) are described. These are for the most part not new, but they have not all appeared in book form, and it seemed appropriate to include them here. Lastly, we report on the results obtained when the modular symbols algorithm was carried out for all $N\le1000$. In a comprehensive set of tables we give details of the curves found, together with all isogenous curves (5113 curves in all, in 2463 isogeny classes). [In the first edition, these numbers were given as 5089 and 2447 respectively, as the curves of conductor 702 were inadvertently omitted.] Specifically, we give for each curve the rank and generators for the points of infinite order, the number of torsion points, the regulator, the traces of Frobenius for primes less than 100, and the leading coefficient of the $L$-series at $s=1$; we also give the reduction data (Kodaira symbols, and local constants) for all primes of bad reduction, and information about isogenies.
Contents of Second Edition
  • Introduction
  • Modular symbol algorithms
  • Modular Symbols and Homology
  • The upper half-plane, the modular group and cusp forms
  • The duality between cusp forms and homology
  • Real structure
  • Modular symbol formalism
  • Rational structure and the Manin-Drinfeld Theorem
  • Triangulations and homology
  • M-symbols and $\Gamma_0(N)$
  • Conversion between modular symbols and M-symbols
  • Action of Hecke and other operators
  • Working in $H^+(N)$
  • Modular forms and modular elliptic curves
  • Splitting off one-dimensional eigenspaces
  • $L(f,s)$ and the evaluation of $L(f,1)/\mathrm{period}(f)$
  • Computing Fourier coefficients
  • Computing periods I
  • Computing periods II: Indirect method
  • Computing periods III: Evaluation of the sums
  • Computing $L^{(r)}(f,1)$
  • Obtaining equations for the curves
  • Computing the degree of a modular parametrization
  • Modular Parametrizations
  • Coset representatives and Fundamental Domains
  • Implementation for $\Gamma_0(N)$
  • Appendix to Chapter II: Examples
  • Example 1. N=11
  • Example 2. N=33
  • Example 3. N=37
  • Example 4. N=49
  • Elliptic curve algorithms
  • Terminology and notation
  • The Kraus--Laska--Connell algorithm and Tate's algorithm
  • The Mordell--Weil group I: finding torsion points
  • Heights and the height pairing
  • The Mordell--Weil group II: generators
  • The Mordell--Weil group III: the rank
  • The period lattice
  • Finding isogenous curves
  • Twists and complex multiplication
  • The tables
  • Elliptic curves
  • Mordell--Weil generators
  • Hecke eigenvalues
  • Birch--Swinnerton-Dyer data
  • Parametrization degrees
  • Bibliography

Algorithms for programmers By Jorg Arndt


Algorithms for programmers

By Jorg Arndt

This is a draft of a book about selected algorithms. The intended audience is programmers who are interested in the treated algorithms and actually want to create and understand working and reasonably optimized code.
The style varies somewhat, which I do not consider bad per se: while some topics (such as fast Fourier transforms) need a clear and explicit introduction, others (like the bit wizardry chapter) seem to be best presented by basically showing the code with just a few comments.
The pseudo language Sprache is used when I see a clear advantage to do so, mainly when the corresponding C++ does not appear to be self-explanatory. Larger pieces of code are presented in C++. C programmers do not need to be shocked by the ‘++’ as only a rather minimal set of the C++ features is used. Some of the code, especially in part 3 (Arithmetical algorithms), is given in the pari/gp language, as the use of other languages would likely bury the idea in technicalities.

Computer Programming Algorithms Directory


Computer Programming Algorithms Directory

Encryption Algorithms
  • Advanced Encryption Standard (AES), Data Encryption Standard (DES), Triple-DES and Skipjack Algorithms - Offers descriptions of the named encryption algorithms.
  • Blowfish - Describes the Blowfish encryption algorithm. Offers source code for a variety of platforms.
  • KremlinEncrypt - Cryptography site provides an overview of cryptography algorithms and links to published descriptions where available.
  • PowerBASIC Crypto Archives - Offers PowerBASIC source code for many algorithms including:
    Hashing - RIPEMD-160, MD5, SHA-1, SHA-256, CRC-16, CRC-32, Adler-32, FNV-32, ELF-32
    Encryption - RSA-64, Diffie-Hellman-Merkle Secure Key Exchange, Rijndael, Serpent, Twofish, CAST-128, CAST-256, Skipjack, TEA, RC4, PC1, GOST, Blowfish, Caesar Substitutional Shift, ROT13
    Encoding - Base64, MIME Base64, UUEncode, yEnc, Neuronal Network, URLEncode, URLDecode
    Compression - LZ78, LZSS, LZW, RLE, Huffman, Supertiny
    Pseudo-Random Number Generation (PRNG) - Mersenne Twister Number Generator, Cryptographic PRNG, MPRNG, MOAPRNG, L'Ecuyer LCG3 Composite PRNG, W32.SQL-Slammer
  • TEA - Tiny Encryption Algorithm - Describes the TEA encryption algorithm with C source code (a short transcription of the standard routine appears after this list).
  • xICE - Has links towards the bottom of the page to the description of the xice encryption algorithm as well as the xice software development kit which contains the algorithm's full source code in C++, ASP, JScript, Ruby, and Visual Basic 6.0.
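For reference, the TEA round structure is compact enough to show in full. The sketch below is a C++ transcription of the widely published reference routine; it is included only as an illustration and is not taken from the pages linked above. TEA is valued for its simplicity rather than for state-of-the-art security.

#include <cstdint>
#include <cstdio>

// TEA encrypts a 64-bit block (two 32-bit words) with a 128-bit key
// using 32 very simple mixing rounds and the constant 0x9E3779B9.
void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;
    for (int i = 0; i < 32; ++i) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}

// Decryption runs the same rounds in reverse order.
void tea_decrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0x9E3779B9u * 32;
    const uint32_t delta = 0x9E3779B9;
    for (int i = 0; i < 32; ++i) {
        v1 -= ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
        v0 -= ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        sum -= delta;
    }
    v[0] = v0; v[1] = v1;
}

int main() {
    uint32_t v[2] = {0x01234567, 0x89abcdef};
    const uint32_t k[4] = {1, 2, 3, 4};
    tea_encrypt(v, k);
    tea_decrypt(v, k);
    std::printf("%08x %08x\n", v[0], v[1]);  // round-trips to the original block
    return 0;
}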
Genetic Algorithms
  • Artificial Life - Offers executable and source for ant food collection and the travelling salesman problems using genetic algorithms
  • Genetic Ant Algorithm - Source code for a Java applet that implements the Genetic Ant Algorithm based upon the model given in Koza, Genetic Programming, MIT Press
  • Introduction to Genetic Algorithms - Introduces fundamentals, offers Java applet examples
  • Jaga - Offers a free, open source API for implementing genetic algorithms (GA) and genetic programming (GP) applications in Java
  • SPHINcsX - Describes a methodology to perform a generalized zeroth-order two- and three-dimensional shape optimization utilizing a genetic algorithm
GIS (Geographic Information Systems) Algorithms
  • Efficient Triangulation Algorithm Suitable for Terrain Modelling - Describes algorithm and includes links to source code for various languages
  • Prediction of Error and Complexity in GIS Algorithms - Describes algorithms for GIS sensitivity analysis
  • Point in Polygon Algorithm - Describes algorithm
Sorting Algorithms
  • Andrew Kitchen's Sorting Algorithms - Describes parallel sorting algorithms (a sequential sketch of the odd-even transposition idea appears after this list):
    Odd-Even Transposition Sort has a worst case time of O(n), running on n processors. Its absolute speed-up is O(log n), so its efficiency is O((log n)/n)
    Shear Sort has a worst case time of O(√n log n), running on n processors. Its absolute speed-up is O(√n), so its efficiency is O(1/√n)
  • Ariel Faigon's Library of Sorting Algorithms - C source code for a variety of sorting algorithms including Insertion Sort, Quick Sort, Shell Sort, Gamasort, Heap Sort and Sedgesort (Robert Sedgewick quicksort optimization)
  • Flash Sort - Describes the FlashSort algorithm which sorts n elements in O(n) time
  • Michael Lamont's Sorting Algorithms - Describes common sorting algorithms:
    O(n²) Sorts - bubble, insertion, selection and shell sorts
    O(n log n) Sorts - heap, merge and quick sorts
  • Sequential and Parallel Sorting Algorithms - Describes many sorting algorithms:
    Quicksort
    Heapsort
    Shellsort
    Mergesort
    Sorting Networks
    Bitonic Sort
    Odd-Even Mergesort
    LS3-Sort
    4-way Mergesort
    Rotate Sort
    3n-Sort
    s^2-way Mergesort
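A sequential simulation of the odd-even transposition idea mentioned in the first entry above fits in a few lines. This C++ sketch is illustrative only; on a real machine the compare-exchange pairs within each round would run on separate processors.

#include <algorithm>
#include <cstdio>
#include <vector>

// Sequential simulation of odd-even transposition sort: n rounds,
// alternately comparing pairs (0,1),(2,3),... and (1,2),(3,4),...
// With one processor per element, each round is one parallel step,
// giving O(n) parallel time.
void odd_even_transposition_sort(std::vector<int>& a) {
    const int n = a.size();
    for (int round = 0; round < n; ++round)
        for (int i = round % 2; i + 1 < n; i += 2)
            if (a[i] > a[i + 1])
                std::swap(a[i], a[i + 1]);
}

int main() {
    std::vector<int> a = {5, 1, 4, 2, 8, 3};
    odd_even_transposition_sort(a);
    for (int x : a) std::printf("%d ", x);
    std::printf("\n");  // prints 1 2 3 4 5 8
    return 0;
}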
Search Algorithms
  • Exact String Matching Algorithms - Details 35 exact string search algorithms.
  • Finding a Loop in a Singly Linked List - Outlines several methods for identifying loops in a singly linked list (one of them, the two-pointer method, is sketched after this list).
  • Fibonaccian search - Describes an O(log n) search algorithm for sorted arrays that is faster than a binary search for very large arrays.
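One of the standard techniques the loop-detection entry refers to is the two-pointer (tortoise and hare) method. The following C++ sketch illustrates that idea; it is not code from the linked page.

#include <cstdio>

struct Node { int value; Node* next; };

// Floyd's "tortoise and hare": advance one pointer by one node and the
// other by two; they meet if and only if the list contains a cycle.
bool has_loop(Node* head) {
    Node* slow = head;
    Node* fast = head;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return true;
    }
    return false;
}

int main() {
    Node c{3, nullptr}, b{2, &c}, a{1, &b};
    std::printf("%d\n", has_loop(&a));  // 0: no cycle
    c.next = &b;                        // create a cycle b -> c -> b
    std::printf("%d\n", has_loop(&a));  // 1: cycle detected
    return 0;
}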
Tree Algorithms
  • B-Trees: Balanced Tree Data Structures - Introduction to B-Trees. Describes searching, splitting and inserting algorithms.
Computational Geometry Algorithms
  • CGAL - Offers open source C++ library of computational geometry algorithms.
  • FastGEO - Offers source code for a library of computational geometry algorithms such as geometrical primitives and predicates, hull construction, triangulation, clipping, rotations and projections using the Object Pascal language.
  • Wykobi - FastGEO library ported to C++.
Phonetic Algorithms
  • Lawrence Philips' Metaphone Algorithm - Describes an algorithm which returns the rough approximation of how an English word sounds. Offers a variety of source code listings for the algorithm.
  • Soundex Algorithms - Describes the NYSIIS VS Soundex and R. C. Russell's soundex algorithms.
Project Management Algorithms
  • Calculations for Critical Path Scheduling - Describes the algorithms for calculating critical paths with both ADM and PDM networks.
  • Resource Leveling Using the Minimum Moment Heuristic - Offers a Windows 3.1 download that includes a .PDF document describing the algorithm.
  • Resource-Constrained Project Scheduling - (.PDF) Describes several algorithms for resource leveling:
    Basic Single Mode RCPSP
    Basic Multi-Mode RCPSP
    Stochastic RCPSP
    Bin Packing related RCPSP
    Multi-resource constrained project scheduling problem (MRCPSP)
Miscellaneous Algorithms
  • AI Horizon - Has a variety of algorithms, from basic computer science data structures such as 2-3 trees to AI-related algorithms such as minimax and a discussion of machine learning algorithms.
  • Hash Algorithms - Overview and source code (in C, Pascal and Java) for many general purpose hashing algorithms.
  • Porter Stemming Algorithm - Describes a process for removing the commoner morphological and inflexional endings from words in English. Its main use is as part of a term normalisation process that is usually done when setting up Information Retrieval systems.
  • Rubik's Cube - Solves a Rubik's Cube using the BestFast search algorithm and profile tables.
  • Simulated Annealing - The fundamental idea is to allow moves resulting in solutions of worse quality than the current solution (uphill moves) in order to escape from local minima. The probability of doing such a move is decreased during the search (a sketch of this acceptance rule appears after this list).
  • The Stony Brook Algorithm Repository - Offers a collection of algorithm implementations for over seventy of the most fundamental problems in combinatorial algorithms:
  1. Data Structures - Dictionaries, Priority Queues, Suffix Trees and Arrays, Graph Data Structures, Set Data Structures, Kd-Trees
  2. Numerical Problems - Solving Linear Equations, Bandwidth Reduction, Matrix Multiplication, Determinants and Permanents, Linear Programming/Simplex Method, Random Number Generation, Factoring and Primality Testing, Arbitrary Precision Arithmetic, Knapsack Problem, Discrete Fourier Transform
  3. Combinatorial Problems - Sorting, Searching, Median and Selection, Permutations, Subsets, Partitions, Graphs, Calendrical Calculations, Job Scheduling, Satisfiability
  4. Graph Problems - Polynomial Time Problems (Connected Components, Topological Sorting, Minimum Spanning Tree, Shortest Path, Transitive Closure and Reduction, Matching, Eulerian Cycle / Chinese Postman, Edge and Vertex Connectivity, Network Flow, Drawing Graphs Nicely, Drawing Trees, Planarity Detection and Embedding)
  5. Graph Problems - Hard Problems (Clique, Independent Set, Vertex Cover, Traveling Salesman Problem, Hamiltonian Cycle, Graph Partition, Vertex Coloring, Edge Coloring, Graph Isomorphism, Steiner Tree, Feedback Edge/Vertex Set)
  6. Computational Geometry - Robust Geometric Primitives, Convex Hull, Triangulation, Voronoi Diagrams, Nearest Neighbor Search, Range Search, Point Location, Intersection Detection, Bin Packing, Medial-Axis Transformation, Polygon Partitioning, Simplifying Polygons, Shape Similarity, Motion Planning, Maintaining Line Arrangements, Minkowski Sum
  7. Set and String Problems - Set Cover, Set Packing, String Matching, Approximate String Matching, Text Compression, Cryptography, Finite State Machine Minimization, Longest Common Substring, Shortest Common Superstring
  • NIST Dictionary of Algorithms and Data Structures - Some entries have links to implementations.
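The acceptance rule described in the Simulated Annealing entry above can be sketched in a few lines of C++. The cooling schedule and constants here are illustrative and are not taken from the linked page.

#include <cmath>
#include <cstdio>
#include <random>

// Metropolis-style acceptance: downhill moves are always taken, and an
// uphill move of size delta is taken with probability exp(-delta / T).
// T is lowered as the search proceeds, so uphill moves become rarer.
bool accept(double delta, double temperature, std::mt19937& rng) {
    if (delta <= 0) return true;  // improvement: always accept
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(-delta / temperature);
}

int main() {
    std::mt19937 rng(42);
    double T = 10.0;
    for (int step = 0; step < 5; ++step, T *= 0.5)
        std::printf("T=%.2f accept uphill move of 3? %d\n", T, accept(3.0, T, rng));
    return 0;
}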

Optimization Algorithms on Matrix Manifolds Written by P.-A. Absil, R. Mahony & R. Sepulchre


Optimization Algorithms on Matrix Manifolds

Written by P.-A. Absil, R. Mahony & R. Sepulchre

Foreword by Paul Van Dooren
Constrained optimization is quite well established as an area of research, and there exist several powerful techniques that address general problems in that area. In this book a special class of constraints is considered, called geometric constraints, which express that the solution of the optimization problem lies on a manifold. This is a recent area of research that provides powerful alternatives to the more general constrained optimization methods. Classical constrained optimization techniques work in an embedded space that can be of a much larger dimension than that of the manifold. Optimization algorithms that work on the manifold have therefore a lower complexity and quite often also have better numerical properties (see, e.g., the numerical integration schemes that preserve invariants such as energy). The authors refer to this as unconstrained optimization in a constrained search space.
The idea that one can describe difference or differential equations whose solution lies on a manifold originated in the work of Brockett, Flaschka, and Rutishauser. They described, for example, isospectral flows that yield time-varying matrices which are all similar to each other and eventually converge to diagonal matrices of ordered eigenvalues. These ideas did not get as much attention in the numerical linear algebra community as in the area of dynamical systems because the resulting difference and differential equations did not lead immediately to efficient algorithmic implementations.
An important book synthesizing several of these ideas is Optimization and Dynamical Systems (Springer, 1994), by Helmke and Moore, which focuses on dynamical systems related to gradient flows that converge exponentially to a stationary point that is the solution of some optimization problem. The corresponding discrete-time version of this algorithm would then have linear convergence, which seldom compares favorably with state-of-the-art eigenvalue solvers.
The formulation of higher-order optimization methods on manifolds grew out of these ideas. Some of the people that applied these techniques to basic linear algebra problems include Absil, Arias, Chu, Dehaene, Edelman, Eldén, Gallivan, Helmke, Hüper, Lippert, Mahony, Manton, Moore, Sepulchre, Smith, and Van Dooren. It is interesting to see, on the other hand, that several basic ideas in this area were also proposed by Luenberger and Gabay in the optimization literature in the early 1980s, and this without any use of dynamical systems.
In the present book the authors focus on higher-order methods and include Newton-type algorithms for optimization on manifolds. This requires a lot more machinery, which cannot currently be found in textbooks. The main focus of this book is on optimization problems related to invariant subspaces of matrices, but this is sufficiently general to encompass well the two main aspects of optimization on manifolds: the conceptual algorithm and its convergence analysis based on ideas of differential geometry, and the efficient numerical implementation using state-of-the-art numerical linear algebra techniques.
The book is quite deep in the presentation of the machinery of differential geometry needed to develop higher-order optimization techniques, but it nevertheless succeeds in explaining complicated concepts with simple ideas. These ideas are then used to develop Newton-type methods as well as other superlinear methods such as trust-region methods and inexact and quasi-Newton methods, which precisely put more emphasis on the efficient numerical implementation of the conceptual algorithms.
This is a research monograph in a field that is quickly gaining momentum. The techniques are also being applied to areas of engineering and robotics, as indicated in the book, and it sheds new light on methods such as the Jacobi-Davidson method, which originally came from computational chemistry. The book makes a lot of interesting connections and can be expected to generate several new results in the future.
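To make the geometric idea concrete, here is a minimal C++ sketch, not taken from the book, of gradient descent on the unit sphere for the Rayleigh quotient: the Euclidean gradient is projected onto the tangent space at the current point, a step is taken, and the result is retracted back to the sphere by renormalization. The matrix, step size, and iteration count are illustrative.

#include <array>
#include <cmath>
#include <cstdio>

using Vec = std::array<double, 3>;

Vec matvec(const double A[3][3], const Vec& x) {
    Vec y{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

int main() {
    // Minimize x^T A x over the unit sphere; the minimizer is the
    // eigenvector of the smallest eigenvalue of this symmetric matrix.
    const double A[3][3] = {{4, 1, 0}, {1, 3, 0}, {0, 0, 1}};
    Vec x = {1 / std::sqrt(3.0), 1 / std::sqrt(3.0), 1 / std::sqrt(3.0)};
    const double step = 0.1;

    for (int it = 0; it < 200; ++it) {
        Vec g = matvec(A, x);                       // half the Euclidean gradient
        double xg = dot(x, g);
        Vec y;
        for (int i = 0; i < 3; ++i) {
            double tangent = g[i] - xg * x[i];      // project onto the tangent space
            y[i] = x[i] - step * tangent;           // take a descent step
        }
        double norm = std::sqrt(dot(y, y));
        for (int i = 0; i < 3; ++i) x[i] = y[i] / norm;  // retract to the sphere
    }
    std::printf("x = (%.4f, %.4f, %.4f), x^T A x = %.4f\n",
                x[0], x[1], x[2], dot(x, matvec(A, x)));
    return 0;
}

For the sphere, renormalization plays the role of the retraction; the search never leaves the constraint set, which is the "unconstrained optimization in a constrained search space" viewpoint mentioned in the foreword.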
Read More/Download

Data Structures and Algorithms: Annotated Reference with Examples By Granville Barnett and Luca Del Tongo


Data Structures and Algorithms: Annotated Reference with Examples

By Granville Barnett and Luca Del Tongo

Every book has a story as to how it came about and this one is no different, although we would be lying if we said its development had not been somewhat impromptu. Put simply, this book is the result of a series of emails sent back and forth between the two authors during the development of a library for the .NET framework of the same name (with the omission of the subtitle of course!). The conversation started off with something like, "Why don't we create a more aesthetically pleasing way to present our pseudocode?" After a few weeks this new presentation style had in fact grown into pseudocode listings with chunks of text describing how the data structure or algorithm in question works and various other things about it. At this point we thought, "What the heck, let's make this thing into a book!" And so, in the summer of 2008 we began work on this book side by side with the actual library implementation. When we started writing this book the only things that we were sure about with respect to how the book should be structured were:
  1. always make explanations as simple as possible while maintaining a moderately fine degree of precision to keep the more eager minded reader happy; and
  2. inject diagrams to demystify problems that are even moderately challenging to visualize (. . . and so we could remember how our own algorithms worked when looking back at them); and finally
  3. present concise and self-explanatory pseudocode listings that can be ported easily to most mainstream imperative programming languages like C++, C#, and Java.
A key factor of this book and its associated implementations is that all algorithms (unless otherwise stated) were designed by us, using the theory of the algorithm in question as a guideline (for which we are eternally grateful to their original creators). Therefore they may sometimes turn out to be worse than the "normal" implementations, and sometimes not. We are two fellows of the opinion that choice is a great thing. Read our book, read several others on the same subject and use what you see fit from each (if anything) when implementing your own version of the algorithms in question. Through this book we hope that you will see the absolute necessity of understanding which data structure or algorithm to use for a certain scenario. In all projects, especially those that are concerned with performance (here we apply an even greater emphasis on real-time systems), the selection of the wrong data structure or algorithm can be the cause of a great deal of performance pain.
Therefore it is absolutely key that you think about the run time complexity and space requirements of your selected approach. In this book we only explain the theoretical implications to consider, but this is for a good reason: compilers are very different in how they work. One C++ compiler may have some amazing optimisation phases specifically targeted at recursion, another may not, for example. Of course this is just an example, but you would be surprised by how many subtle differences there are between compilers. These differences may make a fast algorithm slow, and vice versa. We could also factor in the same concerns about languages that target virtual machines, leaving all the actual various implementation issues to you given that you will know your language's compiler much better than us...well, in most cases. This has resulted in a more concise book that focuses on what we think are the key issues.
One final note: never take the words of others as gospel; verify all that can be feasibly verified and make up your own mind. We hope you enjoy reading this book as much as we have enjoyed writing it.
Read More/Download