Atomese SIMD (OpenCL/CUDA) Interfaces
====================================
Experimental effort to enable I/O between Atomese and SIMD compute
resources. The primary target is commodity GPUs, running either OpenCL
or CUDA. The current prototype accesses the hardware at a low level,
working with individual vectors. High-level APIs, such as those commonly
used in deep learning, are not supported in this interface layer.

The experiment, as it is currently evolving, is to understand graph
rewriting between "natural" descriptions of compute systems. For
example, the "natural" description of a deep learning transformer is
a kind of wiring diagram, indicating where data flows. By contrast, the
"natural" description of a GPU compute kernel is a C/C++ function
prototype. How can one perform the rewrite from the high-level
description to the low-level description? Normally, this is done "brute
force": a human programmer sits down and writes a large pile of GPU
code, creating systems like PyTorch or TensorFlow. The experiment here
is to automate the gluing of high-level descriptions to low-level ones.

Such automation resembles what compilers do. For example, a C++ compiler
accepts C++ code and emits machine assembly code. A Java compiler
accepts Java and emits Java bytecode. The goal here is to accept
high-level functional descriptions, such as that of a transformer, and
to generate low-level invocations of GPU kernels. However, the project
here is not meant to be a "compiler" per se; if that were what one really
wanted, one could just brute-force write one (or use PyTorch,
TensorFlow, ...). Instead, this is meant to be an exploration of graph
rewriting in general, and the GPU target just happens to be a realistic
and suitably complicated example target.

A different way of thinking of this experiment is as an exploration of
the agency of sensori-motor systems. The GPU is a "thing out there" that
can be manipulated by an agent to "do things". How does one perceive the
"thing out there", and describe its properties? How can one control it
and perform actions on it, exerting motor control on that "thing out
there"? How does one perceive the results of those motor actions?

In the present experiment, the "agent" is a collection of graphs
(properly, hypergraphs), represented using
[Atomese](https://wiki.opencog.org/w/Atomese),
and "living" in the OpenCog
[AtomSpace](https://github.com/opencog/atomspace).
That is, the subjective inner form of the agent is a world-model,
consisting of abstractions derived from sensory perceptions of the
external world, and a set of motor actions that can be performed to
alter the state of the external world. The agent itself is constructed
from Atomese; the GPU is a part of the external world, to be manipulated.

The above description might feel like excessive anthropomorphising of
a mechanical system. But that's kind of the point: to force the issue,
and explore what happens when an agentic viewpoint is explicitly
adopted.

[Atomese](https://wiki.opencog.org/w/Atomese) is the interface language
for the OpenCog [AtomSpace](https://github.com/opencog/atomspace)
hypergraph database. It has a variety of different ways of talking to
external subsystems. These include:

* The [GroundedSchemaNode](https://wiki.opencog.org/w/GroundedSchemaNode)
  allows external python, scheme and shared-library functions to be
  called. (A minimal sketch follows below.)
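
For concreteness, here is a minimal sketch (not from this README) of how
a GroundedSchemaNode is typically used, written with the stock AtomSpace
guile bindings; the function name `say-hello` is just an illustration:

```scheme
; Minimal sketch, assuming the standard (opencog) guile modules are installed.
(use-modules (opencog) (opencog exec))

; An ordinary scheme function; it receives Atoms and returns an Atom.
(define (say-hello atom)
  (Concept (string-append "hello " (cog-name atom))))

; Wrap the function in a GroundedSchemaNode, and call it from Atomese
; by executing an ExecutionOutputLink.
(cog-execute!
  (ExecutionOutput
    (GroundedSchema "scm: say-hello")
    (List (Concept "GPU"))))
```

Executing the ExecutionOutputLink passes the Atoms in the ListLink as
arguments to `say-hello`, and the returned Atom lands back in the
AtomSpace. This is one of the generic hooks for reaching out to external
subsystems.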

New in this version: Atomese interface descriptions are now generated
for OpenCL kernel interfaces. This should allow introspection of the
interfaces, and their manipulation in Atomese. We'll see how that goes.
Some problems are already visible. But some groundwork is there.
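
As a purely illustrative sketch (no assumptions are made here about the
specific Atom types that the OpenCL interface generator produces), the
stock AtomSpace guile calls for eyeballing whatever descriptions do get
generated look like this:

```scheme
; Hypothetical sketch: only generic AtomSpace introspection calls are
; used; the actual Atom types produced for kernel interfaces are not
; assumed.
(use-modules (opencog))

; Dump everything currently in the AtomSpace, to eyeball whatever
; interface descriptions were generated.
(cog-prt-atomspace)

; Or list all Atoms of one given type, e.g. all PredicateNodes.
(cog-get-atoms 'PredicateNode)
```

Whether the generated descriptions are actually convenient to work with
this way is exactly the open question mentioned above.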

The demo is minimal, but it works. Tested on both AMD Radeon R9 and
Nvidia RTX cards.

Steps:
* Build and install cogutils, the AtomSpace, sensory and then the code
  here. This uses the same build style as all other OpenCog projects:
  `mkdir build; cd build; cmake ..; make; sudo make install`
* Run unit tests: `make check`. If the unit tests fail, that's probably
  because they could not find any GPU hardware. See below.
* Look over the examples. Run them:
  `cd examples; guile -s atomese-kernel.scm`
  (An interactive variant is sketched below.)
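
A small variation (a sketch; only the file name comes from the bullet
above, the rest is stock guile): load the demo from an interactive guile
session instead, so that the Atoms it creates can be inspected afterwards
with the introspection calls sketched earlier.

```scheme
; Minimal sketch: run the kernel demo from an interactive guile REPL
; rather than with "guile -s". Assumes the current directory is examples/.
(load "atomese-kernel.scm")
```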