Neural Network Hardware

Latest revision: 13 Nov 1998

When implemented in hardware, neural networks can take advantage of their inherent parallelism and run orders of magnitude faster than software simulations. There exist a number of commercial NNW chips and neurocomputers, VME boards, etc., as well as systems built by HEP groups for on-line trigger applications. We list here some of the neural network chips, boards, and systems that are available commercially, are almost commercial, or are just of special interest to those in HEP.

For an overview of hardware neural networks, see the hypertext or postscript version of the paper (Lindsey, Lindblad 1994). Also, check out the FAQ on Neural Networks, part 7 - Hardware NNW.

The Smart Technologies SIG products site (formerly the Cognizer Almanac) lists many companies, big and small, in NNWs and AI.

Also, the DTI Neurocomputing Web has a directory of NNW companies.

See also the publication by I. Aybay, S. Cetinkaya, and U. Halici, "Classification of Neural Network Hardware", Neural Network World, 1/96, pp. 11-27 (no web version, unfortunately), which has over 90 references.

March 1998 - Online lecture reviewing the status of hardware neural networks.
Note 1: There have been a great number of prototype chips built over the years, typically by university EE groups, and presented at NNW conferences (e.g. see the IJCNN Proceedings). Very few of these, however, have reached commercialization and we have not listed such chips here unless they were specifically targeted for HEP applications.
Note 2: Although we have prices on some of these systems, we generally do not include them here. Prices are subject to considerable change and academic groups can often get substantial discounts anyway. Contact the vendors for current pricing.
Note 3: The information here was obtained from 1993 till current year from various sources, including direct contacts with the vendors, conference proceedings, magazine articles, etc., and should be roughly up to date. However, no effort was made to contact every single vendor to guarantee that all the items here are still available. In fact, some of the smaller companies themselves may no longer exist. Contact the vendors for latest info.
Note 4: * indicates hardware that is no longer available commercially but is still listed here for historical purposes. ? indicates hardware that is probably no longer available, but for which there is no definitive information yet.

Commercially available NNW Chips and/or Systems

NNW PC accelerators and other cards

Non-Commercial or prototype NNW Chips and/or Systems

HEP/NNW Hardware

Accurate Automation Corp. (AAC)
Description: From AAC press clippings: "Accurate Automation's Neural Network Processor (NNP) uses a true multiple-instruction multiple-data (MIMD) architecture capable of running multiple chips in parallel without performance degradation..."

"Each chip houses a high-speed 16-bit processor with on-chip storage for synaptic weights. The processor executes just nine assembly language instructions. For instance, only one instruction is required to take the output of a neuron and pass it through the threshold-based transfer function typical of most neural learning methods.
"Interprocessor communication among multiple NNPs is handled by a proprietary interprocessor bus. A full complement of software development tools for the NNP accommodates any neural network method with a specialized programming language. A software toolbox is included which provides already-coded assembly language programs for all the popular neural learning methods, optimized for multiple parallel processors. For instance, the toolbox already implements back-propagation-of-errors, Hopfield nets, adaptive resonance theory (ART), perceptrons, functional-link, Adaline and Madaline learning methods, with recurrent and Kohonen learning now under construction at Accurate Automation.
"The chips are only available from Accurate Automation already mounted on boards. An ISA version for PC ($4,995) comes with one chip and room for adding daughter boards with up to nine more chips ($3,995 each). The VME version ($19,995) of the board--the one used for LoFLYTE--also includes two TMS 340 DSPs plus room to plug in nine more neural chips. The DSPs allow the board to perform signal preprocessing for the neural network, including digital filtering, pattern recognition, optimization and sparse matrix processing."
For info on the ISA card and VME card, see the AAC products pages. (Note: the Telebyte PC card discussed below is based on this chip.)
Learning: Programmable to implement any particular neural network training algorithm.
Performance: 140 MCPS for a single chip; up to 1.4 GCPS for a 10-processor system.
Vendors: Accurate Automation Corp., 7001 Shallowford Rd., Chattanooga, TN., 37421, USA. Tel: 423-894-4646, fax: 423-894-4645, video phone: 423-510-8448.
An article on AAC's project with the US Air Force to control an unmanned subscale Mach 5.5 waverider (the LoFLYTE demonstration aircraft) with their NNW is in Aviation Week & Space Tech., April 3, 1995, pp. 78-79. See also the list of news reports at AAC's LoFLYTE page.
Adaptive Solutions CNAPS
Description: The CNAPS system is a full NNW development system based on the proprietary CNAPS-1064 Digital Parallel Processor chip, which has 64 sub-processors operating in SIMD mode. Each sub-processor has its own 4 KBytes of local memory and a fixed-point arithmetic unit that performs 1-bit, 8-bit, or 16-bit integer arithmetic. Each sub-processor can emulate one or more neurons, and multiple chips can be ganged together.

The CNAPS Server II is the software and hardware development environment. It comes with a 4-slot VME crate in a standalone cabinet and a controller for the Ethernet interface to a workstation host. A single CNAPS Server II VME card can be configured for 64, 128, 256, or 512 processors, and a card has 16 MBytes of data storage.
The processors must be programmed to execute a given NNW algorithm. The CNAPS tools include CNAPS-C (a C compiler and debugger), QuickLib (hand-coded standard functions callable from CNAPS-C), BuildNet (pre-coded neural network algorithms), and CodeNet (an assembly language debugger).
The CNAPS/PC ISA card uses 1, 2, or 4 of the new CNAPS-1016 parallel processor chips, or two of the 1064 chips, to obtain 16, 32, 64, or 128 CNAPS processors. The respective multiply-accumulate rates are 320M, 640M, 1.28G, and 2.56G per sec. The data rate over the ISA bus is 20 MBytes/s. An optional mezzanine board provides 3 direct I/O interfaces for a possible total rate of 80 MBytes/s. The CNAPS/PC development system is similar to that for the workstation: CNAPS-C compiler, assembler, debugger, API, QuickLib, and example programs.
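The quoted multiply-accumulate rates are consistent with one MAC per processor per clock cycle. A minimal sketch checking this, assuming a 20 MHz clock (the clock rate is inferred from the quoted figures, not stated here):

```python
# Sanity check of the CNAPS/PC multiply-accumulate rates, assuming one MAC
# per processor per clock cycle and a 20 MHz clock (an assumption inferred
# from the quoted figures above).

CLOCK_HZ = 20_000_000  # assumed CNAPS clock rate

def mac_rate(n_processors, clock_hz=CLOCK_HZ):
    """Peak multiply-accumulates per second for an n-processor SIMD array."""
    return n_processors * clock_hz

for n in (16, 32, 64, 128):
    print(f"{n:4d} processors: {mac_rate(n) / 1e9:.2f} GMAC/s")
```

Under that assumption the four configurations reproduce the 320M, 640M, 1.28G, and 2.56G per sec figures exactly.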
BrainMaker software has also now been ported to run on the CNAPS PC cards.
Learning: Learning algorithms can be programmed. Back-propagation and several other algorithms come in the BuildNet package.
Performance: With a single chip, 1.28 billion multiply/accumulates per sec; with 4 chips, 10.24 billion multiply/accumulates per sec. Back-propagation feedforward performs 1.16 billion mult./accum. per sec and 293 million weight updates per sec with 1 chip, and 5.80 billion and 1.95 billion, respectively, with 4 chips.
Vendors: Adaptive Solutions, Inc., 1400 N.W. Compton Drive, Suite 340, Beaverton, OR 97006, USA; tele (503)690-1236.
In Europe, Cromemco GmbH, Dietrich-Bonhoeffer-Str. 4, D-61350 Bad Homburg; tel: 0049-6172-3 30 67, fax: 0049-6172-30 45 19.
Also, John Haynes, Marketing Manager, (503) 690-1236.
References: H. McCartor, "Back Propagation Implementation on the Adaptive Solutions CNAPS Neurocomputer Chip", proc. of NIPS-3, "Advances in Neural Information Processing Systems 3",ed. R. Lippmann et al., 1991, pp. 1028-1031, Morgan Kaufmann Pub.
Comments: Adaptive Solutions has ceased marketing components, including the CNAPS chip and cards, directly to the public and is instead concentrating on high-end OCR processing. However, CNAPS is still sold through secondary vendors and OEM manufacturers such as California Scientific. The H1 experiment is using 10 of the CNAPS VME cards for its NNW level-2 trigger.
HNC 100 NAP Chip and SNAP Neurocomputer * No longer available
Description: The HNC 100 NAP (Numerical Array Processor) chip contains a 1-D systolic array of 4 arithmetic cells in a SIMD ring. The processing is 32-bit floating point. A 17-bit address bus allows a maximum of 512 KBytes of off-chip memory for each cell. A single NAP chip can perform 160 MFLOPS (32-bit).

The SNAP VME board's standard configuration includes 4 of the HNC 100 NAP chips. I/O is over 4 GBytes/sec. A two-board set provides 500 MCPS in feed-forward mode and 128 MCUPS for back-propagation of large networks.
Vendors: HNC Inc., 5501 Oberlin Dr., San Diego Ca. 92121-1718. tel: (619)546-8877, fax: (619)452-6524.
IBM ZISC036 (Zero Instruction Set Computer)
Description: The ZISC is a digital chip with 64 8-bit inputs and 36 radial basis function neurons. Multiple chips can be easily cascaded together to create networks of arbitrary size. The input vector V is compared to a stored prototype vector P for each neuron. The 14-bit neuron output (for 16K possible categories) is a distance value calculated according to 2 selectable norms: (1) dist = sum of abs(Vi-Pi), i=1,64; (2) dist = max of abs(Vi-Pi), i=1,64. Input is 8-bit serial. It takes 3.5 microsecs to load the 64 elements and another 0.5 microsecs for the classification signal to appear.

Learning: On-chip learning, i.e. storage of training vectors as prototypes. Learning processing of a vector takes about another 2 microsecs beyond the 4 microsecs for loading and evaluation.
Performance: At 16 MHz, 4 microsec classification of a 64-component 8-bit vector.
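The two selectable norms are the city-block (L1) and maximum-component (L-infinity) distances. A minimal sketch, with V and P as illustrative stand-ins for an input and a stored prototype:

```python
# Sketch of the two ZISC distance norms between a 64-component 8-bit input
# vector V and a stored prototype P (V and P here are illustrative toy data).

def l1_distance(v, p):
    """Norm 1: sum of absolute component differences (city-block)."""
    return sum(abs(vi - pi) for vi, pi in zip(v, p))

def linf_distance(v, p):
    """Norm 2: maximum absolute component difference."""
    return max(abs(vi - pi) for vi, pi in zip(v, p))

v = [10] * 64
p = [10] * 60 + [20, 30, 40, 50]
print(l1_distance(v, p))    # 10 + 20 + 30 + 40 = 100
print(linf_distance(v, p))  # 40
```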
Vendors: IBM Microelectronics Division, Essonnes Development Lab., IBM France, 225 Boulevard John Kennedy, F 91105 Corbeil-Essonnes, France. Fax:(1)6088-4920. See the WWW pages for the IBM Essonnes Lab and for the ZISC.
Available from Neuroptics Technologies, USA, 131A Stony Circle, Suite 500, Santa Rosa, California 95401; tel: 707-578-2310, fax: 707-577-7424.
References: ZISC 1994; Eide 1994 discusses a PC board. A VME card is also being developed.
Comments: The first generation chip uses very conservative VLSI technology. Later generations are planned and they will use progressively more aggressive technology for increased speed and neuron density.
datafactory (formerly INCO) SAND (Simple Applicable Neural Device) Neurochip & NeuroLution PCI Board
Description: "SAND/1 (Simple Applicable Neural Device) is a cascadable, systolic processor array designed for fast processing of neural networks. Feedforward networks, RBF networks and Kohonen feature maps can be mapped onto the neurochip SAND/1. These most common types of neural networks cover about 75% of all important applications. The performance of a single SAND/1 chip is 200 MCPS (million connections per second), due to four parallel 16-bit multipliers and eight 40-bit adders working in one clock cycle. The clock rate of SAND/1 is 50 MHz." The chip was developed by the Research Center Karlsruhe and the Institute for Microelectronics, Stuttgart (IMS). It is being commercialized by datafactory of Germany.

A PCI Neuroboard has also been developed. The card holds up to 4 SAND/1 neurochips and can achieve 800 MCPS. The board runs under the NeuroLution programming environment. The card can accelerate feedforward, radial basis function, and Kohonen networks. The NeuroLution manager has a menu-driven graphical user interface and is equipped with a variety of network models. The CONNECT scripting language allows for customization of neural network models.
The SAND neurochip based systems are being developed to implement trigger systems for the KASCADE and AUGER astrophysics experiments.
References: SAND web page at Karlsruhe; datafactory Information Systems web page; SAND/1 postscript report. Also: "Neural Network Chips for Trigger Purposes in High Energy Physics", H. Gemmeke, W. Eppler, T. Fischer, A. Menchikov, S. Neursser, Proc. of Nuc. Science Symposium (NSS) 1996, submitted to IEEE Trans. on Nuclear Science.
Vendors: datafactory Informationssysteme GmbH (formerly INCO), Stöhrerstrasse 17, D-04347 Leipzig; phone: +49/341/244950, fax: +49/341/244952.
Innovative Computing Technologies
Description: "Our Feedback Neural Net Chips deliver execution speeds equivalent to six billion connections per second with on-chip learning capability. Unlike its feedforward counterpart, the net can be trained at speeds up to 20 MHz. This speed allows real-time scanning of binary images for any desired set of templates at video rate. Optimized circuitry delivers this performance with very low power consumption, less than 100 mW." - from one of their WWW pages.

Vendors: IC Tech, Inc., 2157 University Park Dr., Okemos, MI 48864, USA; tel: (517)349-4544, fax: (517)349-2255.
Intel 80170NX Electrically Trainable Analog Neural Network (ETANN)
Description: Analog neural network with 64 inputs (0-3 V), 16 internal biases, and 64 neurons with sigmoidal transfer functions (external gain control available). Two-layer feedforward networks can be implemented with 64 inputs, 64 hidden neurons, and 64 output neurons using the two 80x64 weight matrices. Hidden-layer outputs are clocked back through the 2nd weight matrix to do the output-layer processing. Alternatively, a single-layer network of 64 neurons with 128 inputs can be implemented by using both matrices and clocking in 2 sets of 64 inputs. Weights are stored in non-volatile floating-gate synapses (Gilbert analog multipliers) and have roughly 6-bit precision.
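The two-pass evaluation, with hidden outputs clocked back through the second weight matrix, can be sketched in software. This is an illustrative model only (the ETANN computes in analog hardware with roughly 6-bit weights); the function and variable names are ours:

```python
import math

# Illustrative software model of the ETANN's 64-64-64 two-layer evaluation:
# one pass through the first weight matrix, then the hidden outputs are
# clocked back through the second matrix. Not Intel code; names are ours.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One matrix-vector pass followed by the sigmoidal transfer function."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def etann_forward(x, W1, b1, W2, b2):
    hidden = layer(x, W1, b1)     # pass through weight matrix 1
    return layer(hidden, W2, b2)  # hidden outputs through weight matrix 2

# toy example: 64 inputs, all-zero weights and biases -> every output is 0.5
n = 64
x = [1.0] * n
W = [[0.0] * n for _ in range(n)]
b = [0.0] * n
out = etann_forward(x, W, b, W, b)
print(out[0])  # sigmoid(0) = 0.5
```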

The iNNTS learning system is a PC-based development environment which includes software (emulation and interfacing to the chip), a PC expansion bus card, and an external training module for a single chip. An 8-chip board is also available.
Learning: No on-chip learning. Training is done in software emulation and the weights are downloaded to the chip. A chip-in-the-loop training stage is then required to bring the performance close to the emulation level.
Performance: About 8 microsecs propagation time for a 2-layer network, equivalent to roughly 2 billion mult/accumulates per sec.
References: 80170NX Specification booklet, Intel Corp., June 1991; Holler 1989. See also Lindsey 1992 and Akkila 1993 for examples of applications in HEP.
Vendors: * Intel got out of the neural network business, including both the ETANN and the Nestor NI1000 development.
Comments: A detector tracking test, the CDF neural network triggers, and the WA-92 NNW demonstration test have used the ETANN successfully (see the On-line Reference page). The analog processing requires some care in maintaining constant control voltages, but otherwise the chip is quite stable and reliable.
Irvine Sensors 3DANN 
Description: Irvine Sensors has developed a high-density FET stacking technology that allows circuits to be built in the vertical as well as the planar dimensions. Using these stackable circuits, NNWs of very high density can be built, which they refer to as 3DANN (3D Artificial Neural Network). In Sept. 1998 the company won a US Army contract to "demonstrate the feasibility of an ultra high density interconnect to enhance its 3D Artificial Neural Network(TM) (3DANN(TM)) technology. The new interconnect is part of Irvine Sensors' planned progression of technologies intended to eventually lead to a "Silicon Brain", a recognition system conceived to emulate performance of the human central nervous system." The VIP/Balboa Image Processor uses this technology.

Vendors: Irvine Sensors Company. Contact: Lynn O'Mara, Director, Corporate Communications; tel: 1-714-444-8718, fax: 1-714-444-8840.
Micro Devices MD-1220  * No longer available
Description: The MD-1220 is a CMOS VLSI device with eight internal neural circuits, each processing the synaptic data in parallel. Each weight × input product is stored in accumulators as a partial dot product; at the end of the input frame, the data is applied to a threshold function. The internal accumulator provides 16-bit resolution, so each connection takes 16 clock cycles at the 20 MHz clock frequency, i.e. 800 ns to process any synaptic input (or bias). Example: 64 neurons processing at 1.25 Mbits/sec × 64 = 80 million connections/sec. In the interconnect mode all eight inputs plus the bias are processed before outputting a result, which takes 7.2 us; in the other mode it takes 12.8 us. So a 3-layer 32 x 32 x 8 NNW takes (32 + 1) × 800 ns = 26.4 us per layer, or 79.2 us in total. The threshold function can be a hard limiter, threshold limiter, sigmoid, or clamped linear.
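The timing arithmetic above can be reproduced in a few lines, assuming 800 ns per synaptic input or bias as stated:

```python
# Reproducing the MD-1220 timing arithmetic from the description above:
# each synaptic input (plus one bias) takes 800 ns, so a layer whose
# neurons each see k inputs needs (k + 1) * 800 ns.

NS_PER_INPUT = 800  # ns per synaptic input or bias

def layer_time_ns(n_inputs):
    return (n_inputs + 1) * NS_PER_INPUT

# 3-layer 32 x 32 x 8 network: every layer sees 32 inputs plus a bias
total_ns = 3 * layer_time_ns(32)
print(total_ns / 1000.0)  # 79.2 microseconds, matching the quoted figure
```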

Vendors: Micro Devices, 5695B Beggs Rd., Orlando, FL. 32810-2603, USA.
MCE MT19003 Neural Instruction Set Processor
Description: A digital processor chip using signed 12-bit internal neuron values, with a 16-bit multiplier and 35-bit accumulator. Network input values, bias values, synapse weights, and neuron values are held in off-chip memory. Network processing is guided by a program in off-chip memory using a 7-instruction neural instruction set. Neuron values can be scaled by a transfer function using 4 available tables.

Learning: No on-chip learning.
Performance: 1 synapse per clock cycle (40MHz nominal).
References: MT19003 - NISP Data Sheet
Vendors: Micro Circuit Engineering, Alexandra Way, Ashchurch Business Center, Tewkesbury, Gloucestershire GL20 8TB. Tel: (0684)297777, Fax: (0684)299435. See additional info at the DTI NNW entry.
RC Module Neuroprocessor NM6403
Description: "The NM6403 is a high-performance microprocessor with a super-scalar architecture. The architecture includes a control unit, address calculation and scalar processing units, and a node to support vector operations with elements of variable bit length. There are two identical programmable interfaces to work with any memory type, as well as two communication ports hardware-compatible with those of the DSP TMS320C4x, which permit building multi-processor systems..." See the NM6403 home page.

Learning: No on-chip learning built-in. Intended as a component in a programmable neurocomputer.
Performance: Scalar operations: 50 MIPS; 75 MOPS for 32-bit data. Vector operations: 1.2 billion multiplications and additions per second (for 8-bit matrix-matrix multiplications).
References: NNW research page at RC Module; "A VLSI Digital Neural Processor with Variable Word Length of Operands".
Vendors: RC Module, RUSSIA, 125190, Moscow, P.O. Box 166; tel: 152-4661, 152-4631; fax: 152-3168, 152-4661.
National Semiconductor NeuFuz/COP8 Microcontrollers
Description: This system uses a combination of neural network and fuzzy logic software to generate code for National's COP8 microcontrollers. Fuzzy logic provides a very flexible and powerful method for control applications. In cases where the "if-then" rules of the system are known, the rules can be expressed in a systematic and straightforward way using the fuzzy logic membership function and fuzzy rule methodology. However, there can be many situations where one doesn't know the "if-then" rules a priori. In such a case, a neural network can be used to learn the rules, and the neural network's "knowledge" can then be mapped into fuzzy rules and membership functions. National has several packages.
Learning: software learning only
Vendors: National Semiconductor, 2900 Semiconductor Dr., Santa Clara, Ca. 95952-8090. USA,Tel: (800)272-9959. National Semiconductor GmbH, Industriestrasse 10, D-8080 Furstenfeldbruck, Germany, tel:011-49-8141-103-0.
Nestor NI1000
Description: This is a network with Radial Basis Function neurons. During learning, prototype vectors are stored under the assumption that they are picked randomly from the parent distribution. Up to 1024 prototypes can be stored, with 256 dimensions and 5 bits per dimension. Each prototype is then assigned to a given middle-layer neuron. This neuron, in turn, is assigned to an output neuron that represents the particular class for that vector (up to 64 classes are allowed). All middle-layer neurons that correspond to the same class are assigned to the same output neuron. In recall mode, an input vector is compared to each prototype, in parallel, and if the distance between them falls within a given threshold, the middle-layer neuron fires, which in turn fires the corresponding output, or class, neuron.

The learning algorithms can vary and don't simply save every prototype. For example, the particular algorithm can decide whether an input vector is saved as a new prototype or is considered too similar to an existing prototype. The setting of the thresholds can also depend on the algorithm. See Nestor's NI-1000 chip page for more details.
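The recall step can be sketched as follows. This is a simplified software illustration of prototype-based recall, not Nestor's microcode; the distance metric and data are illustrative assumptions:

```python
# Simplified sketch of NI1000-style recall: an input fires every prototype
# whose distance falls within that prototype's threshold, and each firing
# prototype activates its class neuron. Illustrative only (not Nestor code).

def recall(x, prototypes):
    """prototypes: list of (vector, threshold, class_id) triples."""
    fired = set()
    for p, threshold, class_id in prototypes:
        dist = sum(abs(xi - pi) for xi, pi in zip(x, p))  # city-block distance
        if dist <= threshold:        # inside this prototype's influence field
            fired.add(class_id)
    return fired

protos = [([0, 0], 3, "A"), ([10, 10], 3, "B"), ([1, 1], 5, "A")]
print(recall([1, 0], protos))    # near both "A" prototypes -> {'A'}
print(recall([10, 11], protos))  # near the "B" prototype   -> {'B'}
```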
Several cards are also available.
Learning: Two on-chip learning algorithms are available: the Probabilistic Neural Network (PNN) and the Restricted Coulomb Energy (RCE). Also, the micro-coding can be modified for user defined algorithms.
Performance: 40k 256-element patterns per sec (25 microsecs/pattern).
Vendors: Nestor, Inc., One Richmond Square, Providence, RI 02906, USA. Tel:(401)331-9640; fax: (401)331-7319.
Thinking Silicon Ltd., Courtland, Latchmoor Way, Gerrards Cross, Buckinghamshire, SL9 8LW, England; tel: +44(0)1753 891722. Contact: John Greenway.
NeuraLogix (Now Adaptive Logic) NLX110, NLX230, NLX420  * No longer available
Description: NLX110 Eight-Channel Pattern Comparator - compares 8 unknown patterns to 1 reference pattern, or 8 references to 1 unknown pattern. A built-in neural net enhances performance. Pattern lengths can be up to 1 Mbit. Each channel operates at 20 MHz. May be expanded to arrays 32 wide. Hamming or Euclidean distance metrics.

NLX230 Fuzzy Micro Controller - a general-purpose fuzzy logic engine with a built-in neural network for enhanced performance. 30M rules per sec operation; 16 fuzzifiers and 64 rules.
The NLX420 Neural Processor Slice is a digital chip with 16 processing elements (PEs). The "slice" architecture allows multi-chip configurations to be built. Using time multiplexing, one NLX420 configured for 16-bit inputs can emulate a maximum of 1,048,576 neurons, each having 64K synaptic inputs. Weights are stored in external RAM. Transfer functions are implemented with user-loaded piecewise continuous approximations. Input data can be in the form of 1-, 4-, 8-, or 16-bit integer values.
The ADS420 Neural Processor Slice Development System includes a PC AT board, with up to 4 chips, and software.
Learning: No on-chip learning capabilities.
Performance: up to 300 Million mult-accumulates/sec with single chip.
Vendors: Adaptive Logic Inc., 800 Charcot Av., Suite 112, San Jose, CA 95131; tel: 408-383-7200, fax: 408-383-7201.
References: Data sheet for NeuraLogix NLX420, June 1992.
Comments: They have dropped the NLX420 NPS chip to concentrate on fuzzy logic controllers. However, surplus NLX420 chips are probably available from some suppliers for small projects.
NeuriCam NC3001 (TOTEM) chip
Description: Developed jointly by IRST and the University of Trento, under a grant by INFN and the European Union, the NC3001 (TOTEM) digital processor uses a SIMD architecture. It has 32 multiply-and-accumulate processors using 2's complement arithmetic. Data is represented by 16 bits for the inputs, 8 bits for the weights, and 32 bits for the results. Each internal processor has local storage for up to 128 weights to reduce the I/O bandwidth required at run-time. Multiple chips can be combined to build larger networks.

Chips are available as single components or mounted on PC boards for the ISA, PC-104 (single TOTEM chip), VME (dual TOTEM chip), PCI, and Compact-PCI (dual TOTEM chip) busses, "interface-ready" for addition as a neural coprocessor to a host computer. Software is available to ease the development of neural systems based on these boards under the Unix and Windows NT operating systems.
Learning: The chip is optimized for the Reactive Tabu Search algorithm but could be used with other algorithms. For "chip-in-the-loop" training the chip could carry out the forward processing step, as in back-propagation training, much faster than the CPU.
Performance: 1GCPS at 32MHz. A 16-16-1 network runs forward processing in about 2microsecs.
References: The TOTEM chips and related systems are now available from NeuriCam, a recently established company. Additional references can be found under the Docs folder on NeuriCam's site. Also, see Lindsey et al., "Experience with the Reactive Tabu Search".
Vendors: Chips and boards are available from NeuriCam, Via S. Maddalena 2, 38100 Trento, ITALY, Phone: +39 (0461) 260-552, Fax: +39 (0461) 260-617 email:
Oxford Computer Chips and Modules  ?
Description: The On-board Learning Chip (OBL) is a digital, programmable chip that combines memory and processing in one chip. The OBL implements any NNW learning algorithm and transfer function on the chip. A modular design for parallel processing is available.

A236 Parallel DSP Module: a small multichip module containing the A236 parallel DSP chip and 1 to 8 MB of 2 ns access-time RAM. Fully programmable; implements any signal processing and pattern recognition function. 160M MACs/sec at 16 bits.
N010 Parallel DSP Module: a small multichip module containing the N010 parallel DSP chip and 1 to 16 MB of 1 ns access-time RAM. Fully programmable; implements any signal processing and pattern recognition function. 640M MACs/sec at 16 bits.
Intelligent Image Sensors: a family of high-resolution image sensors with integrated feature-extraction networks used for high-speed optical inspection. The sensors are mask-programmable, store up to 50 million weights, and perform 1 GCPS, depending on configuration.
Optical Memory Foundry Service: application-specific, nonvolatile, glass, transmissive, analog-optical memory chips. Custom-designed optical devices that store high-resolution images or neural network, gray-scale, optical weight matrices. As large as 25 mm square, with pixels as small as 1 micron, and each pixel has an intrinsic gray scale.
Pap Smear Micro Library with Test Patterns #2B: an application-specific, nonvolatile, glass, transmissive, analog-optical memory chip mounted on a microscope slide. The chip stores 20 life-size images from cervical Pap smears plus a collection of geometric test patterns and random data. Applications of the Pap Smear Micro Library are in testing cell recognition and in neural network optical weight matrices.
References: AI Expert Magazine, Vol. 10, Num. 6, June 1995, p. 41.
Vendors: Oxford Computers Inc., 39 Old Good Hill Rd., Oxford, Conn. 06478. tel:(203) 881-0891, fax:(203) 888-1146, email:
Philips L-Neuro chips
Description: Philips has previously offered the L-Neuro 1.0 chip and has now announced development of the follow-on L-Neuro 2.3 chip.

L-Neuro 1.0 contains 16 PEs with 16-bit registers. However, the 16 bits for a PE can be treated as 16 1-bit, 4 4-bit, 2 8-bit, or 1 16-bit neuron(s), so a single chip can implement, for example, a 256 1-bit-neuron network. There is a 1 KByte weight memory buffer on-chip to provide 1024 8-bit or 512 16-bit weights. The transfer functions are implemented off-chip so that multiple chips can be cascaded together. The 16 PE outputs are read out serially. The chips are most easily interfaced to Transputer host processors. Processing rates of 100 MCPS in 1-bit mode and 26 MCPS in 8-bit mode are reported for a single chip. The chip can be programmed for learning, and rates of 160 MCUPS and 32 MCUPS are reported for 1-bit and 8-bit modes, respectively. A Transputer-based system with up to 112 L-Neuro 1.0's is available from Telmat, which sells Transputer systems. Also, a PC board with a single L-Neuro chip is reportedly available for around $500.
L-Neuro 2.3 is a second generation that builds on the L-Neuro 1.0 experience. It is cascadable like the L-Neuro 1.0 and consists of 12 processors that may be operated in either parallel (SIMD) or pipelined modes. It has 16-bit weights and neuron outputs in basic mode. Each of the 12 processors contains 128 16-bit registers for storing weights and states, a 16-to-32 bit multiplier, a 32-bit ALU, and a barrel shifter. Micro-coding of the chip is available to the user for customizing it to a given application. With a 60 MHz clock the chip can compute a 32-bit weighted sum over 12 16-bit inputs every 17 ns. This provides 720 MCPS, or 27 times the 8-bit mode of the L-Neuro 1.0. In learning, the 12 weights are updated in parallel within 34 ns. The chip is aimed not only at NNW applications but also at signal processing, image processing, and fuzzy logic (using its ability to extract minimum and maximum values in 12-element vectors).
Vendors: The chip was developed at Laboratoires d'Electronique Philips (LEP), 22 Avenue Descartes, BP 15, 94453 Limeil-Brevannes Cedex, France. Contact: Yannick Deville.
References: For L-Neuro 1.0 see Mauduit et al 1992 and for the L-Neuro 2.3 see Deville 1995.
Sensory Circuits RSC-164 Speech Recognition Chip
Description: "The RSC-164 is a low-cost speech recognition IC designed for use in consumer electronics. It combines an 8-bit processor with neural-net algorithms for high-quality speaker-independent and speaker-dependent speech recognition. The chip also supports voice record/playback, music synthesis, speech synthesis and system control. The CMOS device includes on-chip RAM, ROM, 16 general-purpose I/O lines, A/D and D/A converters and a 4-MIPS dedicated processor.

The RSC-164 uses a pre-trained neural network to perform speech recognition, while high-quality speech synthesis is achieved using a time-domain compression scheme that improves on conventional ADPCM. On-chip digital filtering improves recognition accuracy by pre-processing incoming signals. Dynamic AGC control compensates for people not optimally positioned with respect to the microphone or for people who speak too softly or loudly.
The RSC-164 includes an external memory interface that allows connection with memory devices for audio record/playback, speaker-dependent recognition and extended message lengths for speech synthesis. The RSC-164i does not access external memory, and thus does not provide such features."
See the Sensory Circuits WWW page for more details.
Vendors: Sensory Circuits, 521 E. Weddell Drive, Sunnyvale, CA 94089-2164; tel: (408) 744-9000, fax: (408) 744-1299.
References: The Sensory Circuits WWW page. Also, see Byte Magazine, Vol. 20, Num. 12, Dec. 1995, pp. 97-104.
Siemens MA-16 chip, SYNAPSE-3 Neurocomputer
Description: The Siemens MA-16 chip is a fast matrix-matrix multiplier that can be combined to form systolic arrays, i.e. inputs and outputs are passed from one module to another in an assembly-line manner. A single module can process 4 patterns of 16 elements each (16-bit) with 16 neuron values (16-bit) at a rate of 800 million multiply/accumulates per sec at 50 MHz. Weights are loaded from off-chip RAM, and neuron transfer functions are calculated with off-chip look-up tables.
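As an illustration (not Siemens code), the batch product an MA-16 module performs on a block of 4 patterns against 16 neurons' weights can be sketched as follows; the transfer-function look-up happens off-chip, so only the raw accumulation is shown:

```python
# Sketch of the matrix-matrix product an MA-16 module performs: a batch of
# 4 patterns (16 elements each) against 16 neurons' weight vectors. The
# transfer function is applied off-chip via look-up tables, so it is omitted.

def ma16_block(patterns, weights):
    """patterns: 4x16 input block; weights: 16x16, weights[j] = neuron j."""
    return [[sum(p[k] * w[k] for k in range(16)) for w in weights]
            for p in patterns]

patterns = [[1] * 16 for _ in range(4)]     # toy data: all ones
weights = [[1] * 16 for _ in range(16)]
out = ma16_block(patterns, weights)
print(out[0][0])               # each accumulation sums 16 ones -> 16
print(len(out), len(out[0]))   # 4 patterns x 16 neuron outputs
```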

The SYNAPSE-1 is a complete hardware/software system using 8 of the MA-16 chips. It resides in its own cabinet and communicates via Ethernet with a host workstation. The SYNAPSE2-PC PCI bus accelerator card contains 1 MA-16, while the SYNAPSE3-PC PCI card contains 2 MA-16s.
SYNAPSE3-PC: "Up to three SYNAPSE3-PC boards can be inserted in a Pentium PC. The PC host and board(s) can execute operations simultaneously or independently of each other. The peak performance of one board is approximately 2560 MOPS (1.28 x 10^9 MultAcc/sec); 7160 MOPS (3.58 x 10^9 MultAcc/sec) when using three boards! Such a configuration obtains better performance than SYNAPSE-1, with considerably higher I/O performance and a substantially better price/performance ratio." - Ralf Herzog. The SYNAPSE3-PC is used as an accelerator for the MediaInterface SynUse·Base neural network software.
Learning: The chip doesn't have any specific algorithm hardwired in but can be programmed to do the various calculations necessary for, say, back-prop. The SYNAPSE cards can be programmed for most any NNW algorithm.
Performance: The SYNAPSE-1 with 8 MA-16's had a peak performance of 3.2 billion multiplications (16-bit x 16-bit) and additions (48-bit) per second at a 25 MHz clock rate. SYNAPSE3-PC: 2560 MOPS (1.28 x 10^9 MultAcc/sec), or 7160 MOPS (3.58 x 10^9 MultAcc/sec) when using three boards.
Vendors: SNAT (Siemens Nixdorf Advanced Technologies), which originally sold the MA-16 products, was closed in fall of 1997. However, the chip is still sold as a component of PCI expansion cards by MediaInterface Dresden GmbH, Washingtonstraße 16/16a, D-01139 Dresden; Tel: +49 351 844 3256; Fax: +49 351 844 2067. Also, contact Ralf Herzog, Sales Department, Tel: +49 351 844-2104.
References: Ramacher 1991, Ramacher 1994. The SYNAPSE3-PC page has several info files available for downloading, including a zip file of SYNAPSE3-PC information.
Comments: SYNAPSE-1 is apparently no longer available. The SYNAPSE2-PC is shown at MediaInterface but seems to be superseded by the SYNAPSE3-PC PCI card. The MA-16 requires considerable micro-coding and other support, so apparently Siemens decided not to make it available separately from the SYNAPSE-1. However, at least one group was able to obtain the chips for research purposes; e.g. see the Bologna MA-16 VME card and the WA-92 NNW experiment.
Go to top of page
Synaptics I10XX Object Recognizer Chip
Description: A customizable chip that includes an area imager, 2 NNW architectures, and a digital interface. It can be used to create recognizers for objects such as characters, patterns, or defects, and can process up to 50k images per sec.

Vendors: Synaptics, 2860 Zanker Rd., Suite 206, San Jose Ca. 95134. tel:(408)434-0110, fax: (408)434-9819. See also the entry at the Cognizer site under Synaptics Inc.
Go to top of page
UCLi Ltd. pRAM-256 Chip
Description: "The pRAM-256 is a versatile neural network processor with an on-chip learning unit. It offers the flexibility of a software solution with the speed of hardware. Connections between the pRAM neurons are reconfigurable which allows a network's architecture to be modified at any time. The pRAM-256 can complete one pass of the training process, training all 256 pRAMs, in less than 0.25 ms when operating at the maximum clock speed of 33 MHz. Because of the high number of pRAMs supported by the pRAM-256, a typical neural network can be built using a single pRAM Module. Several pRAM Modules can operate in parallel so that larger networks can be built. The pRAM-256 is fabricated using an advanced sub-micron gate array semi-custom technology from GEC Plessey Semiconductors. The use of a 68 pin PGA package allows a compact neural network to be built into existing and future systems. Interfaces to EISA and VME bus systems have been defined." - from the pRAM WWW page.

Vendor: UCLi Ltd, 5 Gower Street, London WC1E 6HA, Tel: 0171-636 7668, Fax: 0171-637 7921.
References: See the pRAM WWW page and the pRAM papers in Clarkson's publication list.
Comments: Developed by Dr T G Clarkson's group at King's College, the chip has apparently been commercialized by UCLi Ltd. The www page states, "pRAM-256 chips are available for evaluation. Evaluation licence agreements may be requested from UCLi Ltd..."
Go to top of page
Miscellaneous commercial hardware
Go to top of page

BrainMaker Accelerators
Description: BrainMaker Accelerator - nnw development system that operates at 3 million connections/sec. Includes BrainMaker software plus an ISA-bus 20 MHz DSP board based on the TMS320C25 chip.

BrainMaker Professional Accelerator - nnw development system that operates at 40 million connections/sec. Includes software plus two ISA-bus DSP boards with 4 DSP's running in parallel. Includes 5MB RAM (for 600k connections and 250k data points), expandable to 32MB (3.6 million connections and 4 million data points).
They recently announced a BrainMaker Professional CNAPS Accelerator System ($6345-8345) using the Adaptive Solutions CNAPS PC cards. [This probably means they've discontinued their own boards but this is unconfirmed.]
Vendors: California Scientific Software, 10024 Newtown Rd., Nevada City, Ca. 95959-9794, tel: (916)478-9040, (800)264-8112, fax: (916)478-9041.
Go to top of page
Current Technology MM32k
Description: The MM32k is a SIMD parallel computer with 32768 bit-serial processing elements, each of which has 512 bits of memory. A single chip holds 2048 PE's. All PE's are interconnected via a switching network. The MM32k is implemented on a single ISA-bus PC card. It comes with a host PC programming environment, including a C++ class library which overloads typical arithmetic operators and supports variable-precision arithmetic. Comparisons of RBF, Kohonen and BP processing to a 66 MHz i486 show speedups by factors of 161, 76 and 336, respectively; against a 150 MHz Alpha AXP, factors of 31, 11 and 35, respectively. It is also suitable for other problems such as image processing. It costs $5000.
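The programming model described above — overloaded arithmetic operators acting across all 32768 PE's at once — can be illustrated with a toy class (in Python rather than the product's C++ library; `SIMDVar` and its behavior are hypothetical stand-ins, not the vendor API):

```python
import numpy as np

N_PE = 32768  # one value per processing element

class SIMDVar:
    """Toy model of a SIMD variable: one value per PE, elementwise operators."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __add__(self, other):
        return SIMDVar(self.data + other.data)   # one SIMD add across all PE's

    def __mul__(self, other):
        return SIMDVar(self.data * other.data)   # one SIMD multiply across all PE's

# e.g. one multiply-accumulate step of a neural layer, executed on every PE at once
x = SIMDVar(np.ones(N_PE))
w = SIMDVar(np.full(N_PE, 0.5))
acc = x * w + SIMDVar(np.zeros(N_PE))
print(acc.data[0])  # 0.5
```

On the real card each such expression would map to bit-serial operations in the PE array rather than a NumPy call, but the user-visible algebra is the same.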

References: See the Current Technology MM32k Home Page
M. Glover & W. T. Miller,"A massively-parallel SIMD processor for neural network and machine vision applications", pp. 843-849, Proceedings NIPS-6, ed. J.D. Cowan et al., Morgan Kaufmann Pub. 1993.
Vendors: Current Technology, 97 Madbury Road, Durham, NH 03824. tel: (603) 868-2270, fax: (603) 868-2270. Email: Dr. Michael Glover.
Go to top of page
HNC Balboa * No longer available
Description: The Balboa Developer's system is a hardware and software system that provides up to 80-megaflop performance and 20 nnw architectures. Boards include an ISA (AT) card based on a proprietary NN chip and a DSP-based VME card.

Vendors: HNC Inc., 5501 Oberlin Dr., San Diego Ca. 92121-1718. tel: (619)546-8877, fax: (619)452-6524.
Go to top of page
IBM ZISC/ISA Accelerator for PC
Description: A card with 16 ZISC036 chips has been built for the ISA bus. The chips appear as one ZISC with 576 prototypes. The card comes with a C language library for accessing all of the card's functions: training, recall, etc. See the IBM Essonnes Lab ISA Card page.

Vendors: IBM Microelectronics Division, Essonnes Development Lab., IBM France, 225 Boulevard John Kennedy, F-91105 Corbeil-Essonnes, France. Fax: (1) 6088-4920. See the WWW pages for the IBM Essonnes Lab and for the ZISC.
Available in the USA via Neuroptics Technologies, 131A Stony Circle, Suite 500, Santa Rosa, California 95401; tel: (707) 578-2310; fax: (707) 577-7424.
Go to top of page
IBM ZISC PCMCIA card + SIZM Single Inline Module
Description: A card with 3 ZISC036 chips has been built for the PCMCIA or PC Card slot. The chips appear as a single ZISC with 108 prototypes. A Single Inline Module, similar to single inline memory modules (SIMMs), is now also available. A single module carries 6 ZISC chips, and multiple modules can be ganged together.

Vendors: The card was developed by the French Giat company but is available from Neuroptics Technologies, 131A Stony Circle, Suite 500, Santa Rosa, California 95401; tel: (707) 578-2310; fax: (707) 577-7424. The Inline module is described at the IBM Essonnes Lab pages.
Go to top of page
Mosaic QED Board
Description: This is a general purpose board that can be configured for neural network development. It has digital I/O, 16 analog-to-digital conversion channels, and serial communications. The 3.2x4 inch board includes software support.

Vendors: Mosaic Industries Inc., 5437 Central Ave., Suite 1, Newark, Ca. 94560. tel: (510)790-1255, fax: (510)790-0925.
Go to top of page
Neural Technologies - NT5000 & NT6000
Description: NT5000 is a standalone system with co-processor that is initialized and trained via a connection to a PC but from then on can run independently. The networks can have up to 600/4000 neurons (regular or turbo versions). Internal processing is digital but I/O is analog via DAC's and ADC's.

NT6000 is a DSP-based system in a PC plug-in card with up to 2700 neurons. It also includes analog I/O.
Vendors: Neural Technologies, Ltd., Petersfield, UK, tel: 730-260256. Available also from Amplicon Liveline Limited, Hollingdean Rd., Brighton, UK, BN2 4AW. Sales: 0273 570 220, Orders: 0800 525 335, Fax: 0273 570 215.
Go to top of page
NeuroDynamX Neural-Accelerators ?
Description: NDX Neural Accelerator XR25 - ISA-bus card with an Intel i860 XR RISC processor running at 25 MHz to give up to 22.5M connections/sec, and up to two banks of 64MB of RAM.

NDX Neural Accelerator XP50 - ISA-bus card with an Intel i860 XP RISC processor running at 50 MHz to give up to 45M connections/sec, and up to eight banks of 64MB of RAM.
Vendors: NeuroDynamX, P.O. Box 323, Boulder, Co.80306. tel: (303) 442-3539, (800) 747-3531, fax: (303)442-2854.
Go to top of page
Nestor ISA, PCI and VME Cards
Description: Nestor now has several cards with the Ni1000 chip:

Ni1000 Development System: A PC based development system is currently available including an ISA card with a single Ni1000, and interface software. Accelerates DOS and Windows emulation performance by 25x. Hardware Library and C source code license provided. Assembler and emulator included to load new algorithms into microcode.
PCI4000 Recognition Accelerator: this PCI card holds 1 to 4 Ni1000 chips to provide 10K to 80K patterns per second performance. The PCI bus allows the card to support data transfer rates of 64MB/s and 128MB/s. The NestorACCESS software provides a Windows 3.1 development environment.
VME4000 VME Card: This 6U size card (from a collaboration of Nestor and Alta Technology) holds 1-4 Ni1000 chips running at 25MHz. It provides a local control processor and local dual-ported memory. A C language based software environment is included.
Learning: Two on-chip learning algorithms are available: the Probabilistic Neural Network (PNN) and the Restricted Coulomb Energy (RCE). Also, the micro-coding can be modified for user defined algorithms.
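The two named algorithms can be sketched in plain Python. This illustrates the general RCE and PNN ideas, not the Ni1000 microcode; the initial radius, kernel width, and data are made up:

```python
import numpy as np

def rce_train(X, y, r0=1.0):
    """Restricted Coulomb Energy: commit prototypes, shrink wrong-class radii."""
    protos = []                                   # entries: [center, radius, label]
    for x, lab in zip(X, y):
        hits = [p for p in protos if np.linalg.norm(x - p[0]) < p[1]]
        if not any(p[2] == lab for p in hits):
            protos.append([x, r0, lab])           # commit a new prototype
        for p in hits:
            if p[2] != lab:                       # shrink conflicting prototypes
                p[1] = min(p[1], np.linalg.norm(x - p[0]))
    return protos

def pnn_classify(X, y, query, sigma=0.5):
    """Probabilistic Neural Network: pick the class with the largest kernel-density sum."""
    scores = {lab: np.exp(-((X[y == lab] - query) ** 2).sum(axis=1)
                          / (2 * sigma ** 2)).sum()
              for lab in set(y)}
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 1, 1])
print(len(rce_train(X, y)))                      # 2 (one prototype per cluster)
print(pnn_classify(X, y, np.array([0.2, 0.1])))  # 0
```

Both methods store training examples as prototypes rather than iterating gradient updates, which is what makes them a good fit for a fixed-function classifier chip.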
Performance: The ISA card performs 1500 patterns per sec at 33 MHz (no external inputs are available on this card to take advantage of the chip's 40K pattern/sec capability). The PCI4000 delivers up to 80K patterns per sec in the 4-chip version.
References: See Nestor's hardware page for a full description of their hardware systems. Also, the paper "A better way to implement high-speed artificial neural networks", VMEbus Systems, Vol. 12, No. 3, pp. 23-51.
Vendors: Nestor, Inc., One Richmond Square, Providence, RI 02906, USA. Tel:(401)331-9640; fax: (401)331-7319.
Thinking Silicon Ltd., Courtland, Latchmoor Way, Gerrards Cross, Buckinghamshire, SL9 8LW, England, tel: +44 (0)1753 891722. Contact: John Greenway.
Go to top of page
Rapid Imaging VME Ultima and 0491E1-ISA Cards * No longer available
Description: VME Ultima is a general purpose VME card based on the Intel ETANN chip. Up to 128 inputs, on board learning, analog-in, analog-out real-time processing.

0491E1-ISA is a general purpose ISA card, also based on the Intel ETANN chip.
Vendors: Rapid Imaging Inc. 5955 T. G. Lee Blvd., Suite 150, Orlando, Fla. 32822, tel: (407)851-3163, fax:(407)282-0242
Go to top of page
Sundance SMT306 Neural Processing 'C40 TIM
Description: "The SMT306 is an industry-standard size-2 TIM-40 module conforming to the latest specifications for TIM modules. By using two Neural Instruction Set Processors (NiSPs) in conjunction with the TMS320C40 ('C40) parallel DSP, a balance of computational performance and data transfer bandwidth is achieved.

Each NiSP device offers a peak computation rate of 40Million Interconnects/s allowing the potential to build real-time systems using the significant advantages offered by neural network techniques.
The TMS320C40 offers a data transfer bandwidth approaching 120 MBytes/s via its six on-board communications ports (comm-ports) and six DMA engines. The 'C40 also has a modified Harvard architecture with a CPU capable of 50 MFLOPs peak performance."
For full details see their WWW page.
Vendors:Sundance Multiprocessor Technology Limited, 4 Market Square, Amersham, Bucks HP7 0DQ United Kingdom. Telephone: +44 (0)1494 431203 Fax: +44 (0)1494 793168 email: WWW page at
Go to top of page
Telebyte Model 1000 NeuroEngine ?
Description: "The Model 1000 NeuroEngine is a coprocessing board which plugs into a PC and brings high speed neural network processing to the desktop. The board is based upon unique parallel processing concepts developed by AAC. The unit is a multiple instruction multiple data (MIMD) stream coprocessor for the PC. The unique MIMD parallel processing architecture takes advantage of sparse topology to obtain high computational efficiency. This single card fits into a PC and operates under DOS. The board is capable of running 140 million connections/s.", Electronic Product News, July/August 1994, Vol. 23, No. 7/8, p. 20.

Vendors: Telebyte Technology Inc., 270 Pulaski Rd., Greenlawn, NY 11740, USA. Tel: 1-800-835-3298, (516)423-3232; fax: (516)385-8184. See also the catalog page on the T1000 NeuroEngine Board.
Go to top of page
Vision Harvest NeuroSimulator?
Description: ISA-bus coprocessor board using Intel i860 RISC chip. Runs under NeuroVision software and achieves up to 30M connections/sec.

Vendors: Vision Harvest, HCR Box 36, Hatch, N.M. 87937. tel:(505)267-1014, (800)733-4207, fax:(505) 267-1015.
Go to top of page
Ward Systems NeuralBoard?
Description: ISA-bus board using a 25-megaflop, 50 MHz RISC processor. Under their NeuroWindows NNW software, multiple networks can run on one board, and as many as 10 NeuroBoards can be installed in one PC.

Vendors: Ward Systems Group, Executive Park West, 5 Hillcrest Dr., Frederick, Md. 21702. tel: (301) 662-7950, fax: (301) 662-5666.
Go to top of page

Bellcore CLNN learning chip
Description: A group at Bellcore has developed a prototype chip with 32 fully interconnected neurons that runs Boltzmann Machine and Mean Field learning algorithms on-chip. The inputs and outputs are analog, while the weights are 5-bit digital. A separate synapse-only chip allows building of multi-chip systems. They have built a workstation co-processor board with 4 chips, comprising 128 neurons and 2000 synapses.

Learning: Boltzmann Machine and Mean Field learning algorithms on-chip
Performance: 100 Million Connection-updates/sec. The annealing time can be varied between 10-100 microsecs per pattern depending on learning accuracy vs processing speed trade-offs. The chip can also be run in strict feed-forward mode for 3-micro-sec per pattern performance.
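The mean-field relaxation with annealing described above can be sketched numerically. The couplings, temperature schedule, and iteration count here are illustrative, not the Bellcore design:

```python
import numpy as np

def mean_field_anneal(W, bias, T_schedule, iters=20, seed=0):
    """Iterate mean-field neuron activations to self-consistency while cooling."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-0.1, 0.1, size=len(bias))   # initial activations in (-1, 1)
    for T in T_schedule:                         # annealing: lower T step by step
        for _ in range(iters):
            v = np.tanh((W @ v + bias) / T)      # mean-field update rule
    return v

# Tiny symmetric network: positive coupling and bias favor both units "on"
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
bias = np.array([0.2, 0.2])
v = mean_field_anneal(W, bias, T_schedule=[2.0, 1.0, 0.5, 0.1])
print(v)  # both units settle near +1
```

Lengthening the schedule trades speed for a better chance of settling into the lowest-energy state, which mirrors the 10-100 microsecond annealing-time trade-off quoted above.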
References: Alspector 1991, JayaKumar 1992, JayaKumar2 1992.
Go to top of page
Bellcore cascadable VME Card with 3 ETANN's
Description: A prototype VME card that holds 3 ETANN chips, one per layer of a feed-forward network. Multiple boards can be cascaded to build networks with very wide layers.

Learning: No learning algorithms are implemented on the board. Coarse weight values can be set on the board but the elaborate recursive algorithm needed to program fine weight settings has not been written.
Performance: By using one chip per layer instead of using a single chip and clocking the 1st layer outputs back through the 2nd weight matrix, there is a reduction of about 25% in the processing time (e.g. about 5-6nsecs instead of 7-8ns, depending on the network.)
Maker: Not a commercial board. A board or schematics might be available for research projects.
References: Lindsey 1994.
Go to top of page
Hughes M1718 Digital Neural Network
Description: The M1718 Digital Neural Network from Hughes Semiconductor Products Center is a static CMOS neural network subsystem. It is configured as 1024 (1056) weights supporting 32 8-bit inputs which are fully connected to 32 internal nodes. Each weight is 4 bits wide, and multiple devices can be combined to expand the network in both inputs and layers. It is claimed that 60,000 patterns per second can be processed independent of network size. Two chips can form:

32 inputs, 32 hidden, 32 outputs
32 inputs, 32 outputs (single layer)
64 inputs, 32 outputs (single layer)
Six chips can form:
64 inputs, 64 hidden, 32 outputs.
The chip operates at 15 MHz and at this frequency the non-linear matrix multiplication can be performed in 18 microseconds. It is also claimed that provision is made for on-chip weight storage and that a simple means for quickly changing those weights both during normal operation and during chip-in-loop training is provided.
Vendors: Hughes Semiconductor Products Center, but the latest info was that the chip was canceled.
Go to top of page
Motorola
Description: A chip was in development but was canceled. Motorola is now in a collaboration with Adaptive Solutions.

Go to top of page
MESA NeuroClassifier
Description: A neural net chip aimed especially at HEP applications. The processing through 2 layers takes only 20 nsecs! The chip has 70 analog inputs, 6 hidden neurons and 1 output neuron. The weights are 5-bit digital. All 426 weights can be set in 5 ms. The hidden layer neurons have sigmoid transfer functions with user-settable gains, while the output neuron is a linear function of the weighted hidden neuron outputs, so multiple chips can be ganged together. An external threshold discriminator on the output would be needed to give a binary classification.
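The 70-6-1 architecture above can be sketched numerically (the weight values, gain, and quantization scaling are invented for illustration; note that 6x70 + 6 = 426, matching the weight count given above):

```python
import numpy as np

def quantize5(w):
    """Map real-valued weights onto 5-bit signed digital levels (assumed symmetric)."""
    return np.clip(np.round(w * 15), -15, 15) / 15.0

rng = np.random.default_rng(1)
W1 = quantize5(rng.uniform(-1, 1, (6, 70)))  # 70 inputs -> 6 hidden (420 weights)
w2 = quantize5(rng.uniform(-1, 1, 6))        # 6 hidden -> 1 linear output (6 weights)
GAIN = 2.0                                   # user-settable sigmoid gain (made up)

def classify(x, threshold=0.0):
    """Two-layer pass plus the external threshold discriminator mentioned above."""
    h = np.tanh(GAIN * (W1 @ x))             # sigmoid hidden layer
    out = w2 @ h                             # linear output neuron
    return out, bool(out > threshold)

out, decision = classify(rng.uniform(-1, 1, 70))
```

Because the output stage is linear, the analog outputs of several such chips can simply be summed before a single external discriminator, which is the ganging arrangement the description alludes to.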

Learning: No on-chip learning
Performance: 25 Billion mult/accumulates per sec. 5GBytes/s input bandwidth.
Address: Peter Masa, MESA Research Institute, Univ. of Twente, P.O. Box 217, 7500 AE Enschede, Netherlands. Tel: 31 53 89-2753; Secr: 31 53 892644; fax: 31 53 309247.
References: NeuroClassifier Home Page. Also, Masa 1994 and there is a Preliminary Technical Documentation booklet available.
Comments: A university type research chip but some of the chips may be available for a fee. An earlier chip with fixed weights and 20nsec processing time was also built.
Go to top of page
Ricoh RN-200
Description: After the presentation of the RN-100 (12 MHz) single-neuron chip at Seattle 1991 (Eguchi 1991), Ricoh developed a multi-neuron chip called the RN-200, with 16 neurons and 16 synapses per neuron, in a fully digital implementation. The chip has on-chip learning ability using a proprietary backprop algorithm. It comes in a 257-pin PGA package and dissipates 3.0 W (max).

Learning: On-chip, about 1 GCPS at 32 MHz. A 16-16-1 network runs forward processing in about 2 microsecs.
Performance: 1.5-3.0 giga-connections per second.
Vendors: Ricoh Co., Ltd., Yokohama, but the latest info is that the chip is not commercially available or, at least, is restricted to Japan.
Go to top of page
Univ. of Paderborn Pulse Coded Neural Network Accelerator
Description: To accelerate the performance of Pulse Coded NN (PCNN) simulations, a 5 module system was developed by the Computer Vision Lab of the Dept. of Electrical Engineering at the Univ. of Paderborn, Germany. (The PCNN is more closely related to biological neural systems than typical ANN models and is becoming increasingly popular, especially for artificial vision applications.) The modules reside in a box attached to a host Unix machine. The system consists of communication, initialization, connection, neuron, and learning modules. The digital system was implemented primarily with FPGA's. Hebbian style learning is implemented in the hardware. A single system can hold up to 128k neurons and up to 16M synapses.

References: Prof. Dr. rer. nat. Georg Hartmann, University of Paderborn, Department of Electrical Engineering, Pohlweg 47-49,33098 Paderborn, Germany. Tel:++49/ 52 51 / 60 22 06, Fax:++49 / 52 51 / 60 32 38. For online information, see the Computer Vision Lab web site. Detailed info on the hardware is at their Hardware Design page. See also the publications list.
Go to top of page
Univ. Roma Circuit Labs Cellular Neural Network Chips
Description: The 3x3DPCNN and 6x6DPCNN chips implement Chua's Cellular Neural Network model (IEEE Trans. on CAS, Oct. 1988) with 9 and 36 cells, respectively.
The 3x3DPCNN chip has been used to design two PC boards : the first one with two chips (18 cells) and the second one with nine chips (81 cells). A new Board with 720 cells has been designed with 20 6x6DPCNN chips. At the moment it is under construction.
Maker: University of Roma "Tor Vergata", Department of Electronic Engineering, Via della Ricerca Scientifica, 00133 Roma, ITALY. See the WWW CirLab (Circuits Laboratory) pages. Contact: Fausto Sargeni.
Go to top of page


Bologna Univ. ETANN and MA-16 VME Cards
Description: For the WA-92 Neural Network Demonstration experiment, 2 neural network VME cards were developed: one with an ETANN and one with a MA-16 chip. The MA-16 card was not finished in time for the data taking run but was later finished for bench testing.

The ETANN card has 64 DAC's for digitizing input data to the ETANN and 32 channels of ADC's on the outputs. To ensure maximum stability, the ETANN temperature was stabilized by a Peltier cell attached to the chip. The board includes the timing signals necessary to perform 2-layer processing.
The MA-16 card accommodates 16 inputs and can process a 2-layer network with up to 15 hidden units and 1 output.
Learning: No learning algorithms on either board.
References: Baldanza 1993 and the WA-92 NNW experiment page.
Go to top of page
CDF/Michigan ETANN Fastbus Cards
Description: Cards built to implement the CDF neural network triggers. A card holds a single ETANN and can do 2-layer feed-forward processing. ETANN control voltages are also externally settable. The boards are fed via the front panel with 50 analog calorimeter tower signals.

Learning: No learning but weight setting is available.
Performance: Boards have been used in 2 CDF runs with no serious problems.
Makers: The boards were built by the Univ. of Michigan CDF group led by Myron Campbell.
References: Badgett 1992.
Go to top of page
CP-LEAR/Basel ECL Cards and DSP NNW's
Description: A NNW card implemented with discrete ECL components can find tracks in the CP-LEAR detector within 75 nsec. Each card receives hitmap information from 5 sectors of the detector spanning 11 layers, for a total of 5x11=55 binary inputs. A 55-2-1 multilayer network was trained (off-line) by BP to indicate the presence of a track (out=1) or not (out=0). To achieve high speeds, both the input x weight multiplication and the transfer functions are carried out by lookup tables in RAM. The VME cards receive input from overlapping sectors. 18 cards, covering a quarter of the detector, have been built and tested.
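The RAM look-up-table trick works because the inputs are binary: the weighted sum of each group of input bits can be precomputed into a table indexed by the bit pattern, and the transfer function becomes one final lookup. A rough sketch of the idea (the group size, weights, and single-neuron scope are assumptions for illustration, not the CP-LEAR design):

```python
import numpy as np

rng = np.random.default_rng(2)
N_IN, GROUP = 55, 11                 # 55 binary inputs, split into 11-bit groups
w = rng.normal(size=N_IN)            # trained input->neuron weights (made up)

# One partial-sum table per group: 2**11 entries mapping bit pattern -> weighted sum
tables = []
for g in range(0, N_IN, GROUP):
    wg = w[g:g + GROUP]
    tbl = np.array([sum(wg[i] for i in range(GROUP) if pattern >> i & 1)
                    for pattern in range(1 << GROUP)])
    tables.append(tbl)

def neuron(bits):
    """bits: length-55 array of 0/1. Table lookups replace all multiplications."""
    s = 0.0
    for g, tbl in enumerate(tables):
        group = bits[g * GROUP:(g + 1) * GROUP]
        pattern = int("".join(map(str, group[::-1])), 2)   # bit i of pattern = group[i]
        s += tbl[pattern]
    return 1.0 / (1.0 + np.exp(-s))  # in hardware: one more RAM lookup for the sigmoid

bits = rng.integers(0, 2, N_IN)
direct = 1.0 / (1.0 + np.exp(-(w * bits).sum()))
assert abs(neuron(bits) - direct) < 1e-9   # LUT result matches the direct computation
```

In hardware the table reads happen in parallel, so the full forward pass collapses to a few RAM access times, which is how the 75 nsec figure becomes plausible.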

A commercial DSP card, based on the Motorola DSP96002, has been used to implement a NNW to identify neutral kaon events using data from the tracker. The 12-input, 10-hidden, 1-output network runs in 40 microsecs.
Performance: Comparison of the NNW tracking to the conventional methods currently used by CP-LEAR show that it is "better in selection performances as well as in execution time". The extensive off-line tracking is "only marginally better in selecting good events" than the NNW online tracking.
The DSP neutral kaon id network "performs better than classical selection methods in real-time applications". The hardware implementation "provides considerable on-line rate reduction in tagged neutral kaon experiments."
Makers: The boards were built by the Univ. of Basel group (led by P. Pavlopoulos) and CERN (H. Wendler).
References: Athanasiu, Leimgruber, and Pavlopoulos.
Go to top of page
H. Haggerty discrete component network.
Description: For some applications, the weights may never need to be retrained after the patterns are learned. In that case, the NNW circuitry can use fixed weights and avoid the complexity of programmable synapses. Here a test network using analog inputs, fixed resistors for weighting, and differential amplifiers with sigmoidal-type transfer functions was built. The network found track parameters from signals coming directly from a D0 muon chamber. Such networks could be very fast and fairly inexpensive.

Reference: Haggerty 1992.
Go to top of page
KTH ETANN VME Card
Description: A VME card for the ETANN was built using an ELTEC prototyping card with a 68070 CPU on board. Sets of 64 DAC channels and 64 ADC channels, under the control of the 68070, were added to provide analog inputs and to digitize the outputs of the ETANN. The ETANN sat on a separate piggyback card. The ETANN could receive inputs either from the DAC's, using values passed via the VME bus, or directly on the front panel. Similarly, the outputs could be digitized by the ADC's or passed out through a front connector.

Learning: No learning algorithm on the board.
Makers: Th. Lindblad's group at the Royal Institute of Technology, Dept. of Physics Frescati, Frescativaegen 24, S-104 05 Stockholm, Sweden.
References: Molnar 1994
Go to top of page
KTH IBM ZISC PC and VME Cards
Description: A PC/ISA card with 2 of the IBM ZISC chips has been built. A VME card with 4 ZISC's is currently being debugged. The PC card is intended primarily as a way to become familiar with the chips. The ZISC chips themselves are attached to small "piggyback" cards. The ZISC's are electrically very easy to cascade, so these piggyback cards were designed to be stacked to increase the number of available prototype neurons.

Makers: Th. Lindblad's group at the Royal Institute of Technology, Dept. of Physics Frescati, Frescativaegen 24, S-104 05 Stockholm, Sweden.
References: Eide 1994, Lindblad 1994.
Go to top of page
Go to top of page

NNW in HEP Home Page 
Authors: Clark S. Lindsey, Bruce Denby, & Thomas Lindblad

Curator: Clark S. Lindsey