Tuesday, 15 December 2015

C Programming a Combinatorial Problem Solver


Well, this is another interesting topic, and one that is very important for making neural networks useful and for controlling how they behave.

As with recognising images and features within images, we first presented the network with primitive forms that we built using the Laplacian of Gaussian edge detector. The real power of these neural networks is that they can perform a similar operation on the image to find its features and classify them. Imagine for a minute that this is the same task the convolution matrix performs in edge detection, only instead of many iterations across the image being done one after another, the neural network does these iterations all at once through its interconnectivity.

It assigns groups of neurons to deal with each part of the image. These groups are called receptive fields: each responds to only one area of the image and shares the processing of that area with neighbouring groups whose fields overlap, until the whole image is processed.

Right, so how does this happen - how does the network know which neurons are dealing with which segment of the image? It doesn't. It just learns to associate the solutions we want with the image. In other words, if we train it to recognise cats it will, with varying degrees of success depending on how well it is trained and what primitive information it is trained with.

From this we can gather that the network must change a lot, since it has to rewire its connections in order to assign receptive fields to different parts or different features of the image. Here begins the job of the combinator. It is like the hard-wiring in our brains that recognises shapes and edges and picks out only the useful information we need.

The combinator will arrive at a solution for an optimal neural network that will later be trained to recognise cats and dogs. But first we need it to score well on basic shapes and primitive features of the image - this is our hard-wiring.

Here is a sample of code to get you started on combinatorial logic. This combinator will select the optimal connectivity and the number of neurons in each group or assembly that will constitute our receptive fields for each feature of the image.

Once we have a good working combinator that can be tweaked and is flexible across a variety of 'neural topologies' - that is, the form and shape of our artificial brain - we can use it to guide our formless networks into behaviours that, once trained, will recognise exactly what we want in an image and rebuild the image into something useful.

For example, a neural network trained on primitives of dogs and cats could differentiate between them after being trained and initiated with a combinator. Simpler still, a neural network hard-wired for recognising letters and handwriting. But hold on a minute - before I copy this code over, what are the primitives of dogs and cats?
Thinking as a machine..

Dogs / Cats

Colour? No good - dogs and cats often have the same colour, and colour doesn't distinguish them anyway when it can vary with the lighting.

Size? A better one - dogs and cats don't always have the same size. Dogs are generally larger and cats smaller, but that again depends on the distance from the camera. We are looking for invariants.

Shape? This one is invariant to size - we could take a large dog and scale it to the same size as a kitten. We could blur the image with one heuristic, then use another heuristic to find the centres of all the pixel clusters that belong to the blurred image of the dog or cat, then draw lines between the centres and measure the relative distances between them. That would be an invariant we could use to recognise and differentiate between them.
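That centre-to-centre idea can be sketched directly. The snippet below is a minimal illustration of my own - it assumes the blur/cluster step has already produced labelled pixel groups (the `Pixel` struct and its `label` field are hypothetical names, not from the algorithm above) - and shows why dividing by the largest distance makes the descriptor size-invariant:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// One labelled pixel of a blurred blob (hypothetical representation).
struct Pixel { int x, y, label; };

// Centroid of every labelled cluster (labels assumed 0..nlabels-1).
std::vector<std::pair<double,double>>
centroids(const std::vector<Pixel>& px, int nlabels) {
    std::vector<double> sx(nlabels, 0), sy(nlabels, 0), cnt(nlabels, 0);
    for (const Pixel& p : px) { sx[p.label] += p.x; sy[p.label] += p.y; cnt[p.label]++; }
    std::vector<std::pair<double,double>> c(nlabels);
    for (int i = 0; i < nlabels; ++i) c[i] = { sx[i]/cnt[i], sy[i]/cnt[i] };
    return c;
}

// Pairwise centre-to-centre distances divided by the largest one,
// so the descriptor does not change when the animal is scaled.
std::vector<double>
relative_distances(const std::vector<std::pair<double,double>>& c) {
    std::vector<double> d;
    for (size_t i = 0; i < c.size(); ++i)
        for (size_t j = i + 1; j < c.size(); ++j)
            d.push_back(std::hypot(c[i].first - c[j].first,
                                   c[i].second - c[j].second));
    double m = 0; for (double v : d) if (v > m) m = v;
    for (double& v : d) v /= m;                 // scale invariance
    return d;
}
```

Scaling every pixel coordinate by the same factor scales all the raw distances equally, so the normalised distances come out identical - which is exactly the invariance we want.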


Anyway, here's my code for a basic combinator.



1. The function you need for combinatorial logic is based on factorials and comes from the binomial coefficient - to find the number of combinations of M items in N you need the following formula:

n! / ((n - m)!*m!)

The n! means n x (n-1) x (n-2) x ... x 2 x 1. It is basically a way to count all the orderings of N items - or, put another way, how many ways you can arrange N neurons. That is then divided by (N-M)! x M!, which removes the orderings that produce the same M-sized selection.
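As a quick sanity check on the formula, here is a small helper (my own illustration, not part of the combinator below) that evaluates n!/((n-m)!*m!) without computing the full factorials, by multiplying before dividing so every intermediate value stays an exact integer:

```cpp
#include <cassert>
#include <cstdint>

// C(n, m) = n! / ((n - m)! * m!), computed as a running product:
// result accumulates (n-m+1)/1 * (n-m+2)/2 * ... which is exact at
// every step because i consecutive integers are divisible by i!.
uint64_t n_choose_m(uint64_t n, uint64_t m) {
    if (m > n) return 0;
    if (m > n - m) m = n - m;              // C(n,m) == C(n,n-m), fewer steps
    uint64_t result = 1;
    for (uint64_t i = 1; i <= m; ++i) {
        result = result * (n - m + i) / i; // multiply first, then divide
    }
    return result;
}
```

For the alphabet example used below, choosing 3 letters from 8 gives 56 combinations, and the n=6, k=3 case in the code gives 20.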

And my presumption is that there are similar formulas for other, more complex combinations not listed on Wikipedia!

Here is the basic one for a combinator with M in N - which here translates to M neurons in an N-neuron network - such that you get the right number of M-sized groups of neurons to construct a receptive field containing, say, 5 of these M-sized groups, for an image of 1,000,000 pixels segmented into 100 segments of 100x100 pixels.

Of course you can then extend this formula so that each group can possess any number of neurons between, say, 5 and 10, with the goal of finding the right amount of variability to allow for optimal/correct performance while limiting the total number of neurons or layers in the network. Less is more. Why have a network that does the same job with more neurons simply because you didn't find the time to build a combinator(TM)!
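One straightforward reading of that extension (my assumption of what it means - summing the basic count over the allowed range of K) can be sketched like this:

```cpp
#include <cassert>
#include <cstdint>

// C(n, m) as a running product - exact at every step (see above).
uint64_t n_choose_m(uint64_t n, uint64_t m) {
    if (m > n) return 0;
    if (m > n - m) m = n - m;
    uint64_t result = 1;
    for (uint64_t i = 1; i <= m; ++i) result = result * (n - m + i) / i;
    return result;
}

// Total number of candidate assemblies when each group may contain
// anywhere from kmin to kmax neurons drawn from an n-neuron pool:
// sum over k = kmin..kmax of C(n, k).
uint64_t assembly_count(uint64_t n, uint64_t kmin, uint64_t kmax) {
    uint64_t total = 0;
    for (uint64_t k = kmin; k <= kmax; ++k) total += n_choose_m(n, k);
    return total;
}
```

For example, groups of 5 to 10 neurons drawn from a pool of only 10 already give 252+210+120+45+10+1 = 638 candidate assemblies - the search space explodes quickly, which is exactly why the combinator has to prune it.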

Once you have this you can then start PRUNING your neural network until you have it hardwired.
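As a sketch of what one pruning pass might look like (my own minimal illustration - simple magnitude thresholding, not necessarily the method this series will end up using):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One pruning pass: zero every weight whose magnitude falls below a
// threshold (cutting that connection) and report how many survive.
int prune_weights(std::vector<double>& w, double threshold) {
    int kept = 0;
    for (double& wi : w) {
        if (std::fabs(wi) < threshold) wi = 0.0;   // cut the connection
        else ++kept;
    }
    return kept;
}
```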

Here's the code to get you started.

Code:
int No = 100;   // number of subfields or neural assemblies
int Nc = 10;    // number of neurons in each assembly, k
int Ntot = 50;  // total number of neurons, n
int Layers = 2;
/* Example using letters of the alphabet */
char combin_array[] = {'a','b','c','d','e','f','g','h'};
cout << combin_array[2] << "\n";
/* First calculate how many possible combinations there are for 3 in 8 */
/* Combinatorial calculator: n! / ((n - m)! * m!) */
int output[3];
int ncomb_int = 0;
int repeat, match, n_int = 6, k_int = 3;
cpp_int k = k_int;
cpp_int n = n_int;
/* Calculate factorials */
cpp_int n_fact = n;
for(int h = 1; h < n; h++){ n_fact *= n - h; }
cpp_int k_fact = k;
for(int h = 1; h < k; h++){ k_fact *= k - h; }
cpp_int nk = n - k;
cpp_int nk_fact = nk;
for(int h = 1; h < nk; h++){ nk_fact *= h; }
cpp_int n_comb = n_fact / (k_fact * nk_fact);   // n! / ((n-k)! * k!)
ncomb_int = n_comb.convert_to<int>();           // cpp_int back to plain int
cout << "N! = " << n_fact << " K! = " << k_fact << " (N-K)! = " << nk_fact << "\n";
cout << "Number of combinations of " << k << " in " << n << " = " << n_comb << "\n";
Now, in order for this to work, computing factorials needs a lot of integer! So we've got this library which, when you include it, lets you make pretty big numbers:

#include <boost/multiprecision/cpp_int.hpp>
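To see why ordinary integers give up here, a quick standard-C++ check (no Boost needed for the demonstration) of where n! stops fitting in 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Largest n whose factorial still fits in an unsigned 64-bit integer.
// The overflow point is detected by checking the next multiplication
// in advance rather than letting it wrap around.
int largest_fitting_factorial() {
    uint64_t f = 1;
    int n = 1;
    while (f <= UINT64_MAX / (uint64_t)(n + 1)) {  // would (n+1)! still fit?
        ++n;
        f *= (uint64_t)n;
    }
    return n;   // 21! already needs more than 64 bits
}
```

So anything past 20! silently overflows a 64-bit integer - and a network-sized N blows straight past that, which is what cpp_int is for.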

2. OK, once this is taken care of you have the value for N, and that tells you roughly how big your network needs to be - how many neurons is N. Then you have n_comb (that's just my notation), and you simply search with different values of N and K until you get just over the number of combinations you need in n_comb.
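That search can be sketched as a loop - a minimal version of my own, assuming we fix K and grow N until n_comb clears the target:

```cpp
#include <cassert>
#include <cstdint>

// C(n, m) as a running product - exact at every step.
uint64_t n_choose_m(uint64_t n, uint64_t m) {
    if (m > n) return 0;
    if (m > n - m) m = n - m;
    uint64_t result = 1;
    for (uint64_t i = 1; i <= m; ++i) result = result * (n - m + i) / i;
    return result;
}

// Smallest network size n such that C(n, k) reaches the number of
// receptive-field assemblies we need - i.e. n_comb "just over" target.
uint64_t smallest_n_for(uint64_t k, uint64_t needed) {
    uint64_t n = k;
    while (n_choose_m(n, k) < needed) ++n;
    return n;
}
```

For instance, if you need 100 assemblies of 3 neurons each, 9 neurons only give C(9,3) = 84, so the search settles on 10 (C(10,3) = 120).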

Once you know what the values for N and K are (don't forget K has to lie between the number of neurons that will successfully process one segment of your image and some upper or lower limit of your choosing), you can then apply this second bit of code to generate the combinations. In this example, because we are only using the simple formula N!/((N-M)!*M!) above, we only get combinations that may contain repetitions; ideally you would adapt the formula to get a variable, controlled number of repetitions. Here's the code for generating the combinations:

Code:
int combinator[ncomb_int][3];
int select, selectold = -1;
int count = 0;
do{
    /* draw one candidate combination of k indices into combin_array */
    for(int i = 0; i < k_int; i++){
        select = rand() % n_int;       // draw from the n available neurons
        if(selectold == select){ i--; }else{
            output[i] = select;
            selectold = select; }
    }
    /* reject the candidate if it matches a combination already stored */
    repeat = 0;
    for(int j = 0; j < count; j++){
        match = 0;
        for(int m = 0; m < k_int; m++){
            if(output[m] == combinator[j][m]){   // compare against row j
                match++;
            }
            if(match == k_int){ repeat = 1; j = count; }
        }
    }
    if(repeat != 1){
        for(int q = 0; q < k_int; q++){
            combinator[count][q] = output[q];
        }
        count++; }
}while(count < ncomb_int);
cout << "Last 10 combinations\n";
for(int i = ncomb_int - 10; i < ncomb_int; i++){
    cout << "{";
    for(int j = 0; j < k_int; j++){
        cout << "{" << combin_array[combinator[i][j]] << "},";
    }
    cout << "}\n"; }
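One caveat with the random approach above: the rejection loop slows to a crawl as count approaches n_comb, because almost every fresh draw is already stored. A deterministic alternative (not from the original post, just a suggestion) is the classic lexicographic enumeration, which emits every k-subset exactly once with no rejections at all:

```cpp
#include <cassert>
#include <vector>

// Enumerate all k-subsets of {0..n-1} in lexicographic order by
// advancing the rightmost index that still has room to move, then
// resetting everything to its right.
std::vector<std::vector<int>> all_combinations(int n, int k) {
    std::vector<std::vector<int>> out;
    std::vector<int> idx(k);
    for (int i = 0; i < k; ++i) idx[i] = i;         // first subset: 0,1,..,k-1
    while (true) {
        out.push_back(idx);
        int i = k - 1;
        while (i >= 0 && idx[i] == n - k + i) --i;  // find an index with room
        if (i < 0) break;                           // last subset was emitted
        ++idx[i];
        for (int j = i + 1; j < k; ++j) idx[j] = idx[j-1] + 1;
    }
    return out;
}
```

For 3 in 8 this produces exactly the 56 subsets the formula predicts, starting at {0,1,2} and ending at {5,6,7}.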


C Programming an image processing tool.


I wanted to share this because I thought it was worth telling people about: I have written a new algorithm based on convolutional neural networks.

It's not finished yet, but I have included here the first stage - a set of simple functions for preparing the image for input to the neural network. I can also provide a code snippet for the NN, which I think is one of my best. I have been writing C/C++ algorithms for a while and find it very enjoyable.

Here is the result of a simple convolution mask. It uses the Laplacian of Gaussian double differential, which sounds like a mouthful but basically filters the image into zero-crossings. These zero-crossings then feed the rest of the algorithm, which will first cluster them and then classify them as features using the neural network. It's exciting stuff, and I just wanted to share my enthusiasm for this topic, which I know is very popular with people interested in AI - character recognition, for example. It is also helpful to know that you can do this easily in C, because it means you don't have to rely on a photo-editing package that doesn't always deliver the results you want!

So here is a snippet from my code.

Part 1 - A function designed to apply a convolutional filter to an image array in C programming (Easy)
Code:
/* Needs <math.h>, <stdlib.h> and <string.h> */
void laplac_edge_filter(double gauss, int *image, int size){
    int w = (int)sqrt((double)size);      /* image assumed square */
    /* 3x3 Laplacian kernel - later this will be derived from gauss and
       the Laplacian of Gaussian equation instead of being hard-coded */
    int fcon[3][3] = { { 0, -1,  0},
                       {-1,  8, -1},
                       { 0, -1,  0} };
    /* convolve into a separate buffer so source pixels are not destroyed
       before their neighbours have used them */
    int *out = (int*)malloc(size * sizeof(int));
    for(int y = 0; y < w; y++){
        for(int x = 0; x < w; x++){
            int sum = 0;
            for(int i = 0; i < 3; i++){
                for(int j = 0; j < 3; j++){
                    int yy = y + i - 1, xx = x + j - 1;
                    if(yy >= 0 && yy < w && xx >= 0 && xx < w)
                        sum += image[yy * w + xx] * fcon[i][j];
                }
            }
            out[y * w + x] = sum;   /* sum of products at each pixel */
        }
    }
    memcpy(image, out, size * sizeof(int));
    free(out);
}
Of course this function is quite easy and straightforward; the difficult part will come later, when I amend it to derive the convolutional mask - the array applied to the image - rather than hard-coding it.
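For when that amendment happens, here is a hedged sketch (my own, under the assumption that "derive the mask" means sampling the Laplacian of Gaussian equation itself) of building a k x k mask from LoG(x,y) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)), with sigma playing the role of the gauss parameter:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double PI = 3.141592653589793;

// Sample the Laplacian of Gaussian ("double differential") on a
// k x k grid centred at the middle cell, so the mask follows from
// sigma instead of being hard-coded.
std::vector<std::vector<double>> log_kernel(int k, double sigma) {
    std::vector<std::vector<double>> m(k, std::vector<double>(k));
    int c = k / 2;
    double s2 = sigma * sigma;
    for (int y = 0; y < k; ++y)
        for (int x = 0; x < k; ++x) {
            double r2 = (x - c) * (x - c) + (y - c) * (y - c);
            m[y][x] = -1.0 / (PI * s2 * s2) * (1.0 - r2 / (2.0 * s2))
                      * std::exp(-r2 / (2.0 * s2));
        }
    return m;
}
```

With this sign convention the centre is negative and the surround positive; multiply the whole mask by -1 if you want it to match the centre-positive integer kernel used above.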

Part 2 - I thought this might be helpful, as it shows an easy way to read and write images saved as ASCII PGM files. Take a photo, convert it to greyscale if you wish, save it as ASCII PGM, and this will read it straight into an array. It's as basic as it gets, but a good primer for image processing in C.


Code:
void load_image(int *size, const char *fname, int *array){
    int c;                    /* int, not char: fgetc returns EOF as an int */
    int lines = 0;
    int count = 0, x;
    FILE *fh = fopen(fname, "r");
    if(fh == NULL){
        printf("Unable to open file %s\n", fname);
        exit(1);
    }
    while((c = fgetc(fh)) != EOF){
        if(c == '\n') lines++;
        if(lines > 3){        /* skip the PGM header lines */
            if(fscanf(fh, "%d", &x) == 1){
                array[count] = x;
                count++;
            }
        }
    }
    *size = count;            /* set once, after the whole file is read */
    fclose(fh);
}

void save_image(int size, const char *fname, int *array){
    FILE *fout = fopen(fname, "w");
    if(fout == NULL){
        printf("Unable to open file %s\n", fname);
        exit(1);
    }
    /* header still hard-codes 100x100 - fine for these test images */
    fprintf(fout, "P2\n# CREATOR: Edge Detect 0.1\n100 100\n");
    for(int i = 0; i < size; i++){
        fprintf(fout, "%d\n", array[i]);
    }
    fclose(fout);
}

Part 3 - Now, when you go hunting for zero-crossings in an image whose values indicate a feature, it helps if, as well as filtering the image, you can threshold it into a pair of binary intensity levels, 1 and 0. Thresholding is also very easy:

Figure 1. A thresholded image after application of the Laplacian Edge detector's convolution matrix.





As you can see from the images, this kernel is not powerful enough to perform edge detection on its own; that will require the function to deliver the second differential of the Laplacian of Gaussian function:



Code:
for(int i = 0; i < size; i++){
    if(image[i] >= threshold){ image[i] = 0; }
    else                     { image[i] = 1; }
}





Part 4 - Here are just a few of the functions, a sneak preview of the final algorithm: the main engine that will classify the image's features after the clustering algorithm has done its work.

A. First, start by laying out the class structure of the neural net algorithm:

Code:
class neuron{
public:
    double *dG;      // delta-G
    double *Wt;      // weight matrix
    double O;        // theta - bias for the ith unit
    double s;        // activation of the ith unit
    double p1, p2;   // probability of the node relative to phase
    void init(int nodes){
        // number of nodes in the previous layer: memory for weights and deltas
        dG = new double[nodes];
        Wt = new double[nodes];
    }
};

class Nets_and_Boltz{
private:
    int numlayers;
    int *numnodes;     // topology of the net
    int numinputs;
    double *inputs;    // input array, used to normalise data prior to input
    neuron *vnode;     // visible input units
    neuron **hnode;    // hidden units
public:
    // coefficients
    double R;          // learning rate
    double T;          // temperature
    double K;          // Boltzmann constant
    int **comb_array;  // neural assembly array: which neurons are in each assembly
    double update_net(double *in, int phase);
};
B. Then, after you initialise all of these structures and arrays (i.e. give them memory), we can write an update function:

Code:
double Nets_and_Boltz::update_net(double *in, int phase){
    double deltaE = 0, gE = 0;      // deltaE must start at zero
    double prob;
    // feedforward update: Delta Ei = Sum Wij*Sj + Oi
    //   Oi  = threshold/bias for unit i
    //   Si  = activation of the ith unit
    //   Wij = weight between units Si and Sj
    //   Prob(i = on) = 1 / (1 + exp(-Delta Ei / T))  - logistic function
    for(int i = 0; i < numnodes[0]; i++){
        if(phase == 1){
            vnode[i].s = in[i];        // clamped positive phase: 0 or 1
        }else{
            vnode[i].s = rand() % 2;   // free-running negative phase: 0 or 1
        }
    }
    for(int i = 1; i < numlayers; i++){
        for(int j = 0; j < numnodes[i]; j++){
            for(int k = 0; k < numnodes[i-1]; k++){
                if(i == 1){
                    deltaE += hnode[i][j].Wt[k]*vnode[k].s + hnode[i][j].O;
                    gE += hnode[i][j].Wt[k]*vnode[k].s*hnode[i][j].s + vnode[k].O;            // global energy
                }else{
                    deltaE += hnode[i][j].Wt[k]*hnode[i-1][k].s + hnode[i][j].O;
                    gE += hnode[i][j].Wt[k]*hnode[i-1][k].s*hnode[i][j].s + hnode[i-1][k].O;  // global energy
                }
            }
            prob = 1/(1 + exp(-deltaE/T));       // note the minus sign
            if(prob*100 >= (rand() % 100 + 1)){
                hnode[i][j].s = 1;
            }else{
                hnode[i][j].s = 0;
            }
            if(phase == 1){ hnode[i][j].p1 = prob; }
            else          { hnode[i][j].p2 = prob; }   // P(i=on) per phase
            deltaE = 0;
        }
    }
    return gE;
}

Fig. 2. This is a very nice error curve for a similar algorithm that also uses receptive fields. The gradient can be managed via the number of neurons in the hidden layer.


This is the update function and class structure for a stochastic neural network, which uses a temperature-varied function to move from a high-energy state to a low-energy one. Once a low-energy state is gradually reached, through a process called simulated annealing, a classification of the image's features is arrived at which yields the most information.
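The annealing schedule itself can be shown without any network at all. This toy minimiser (my own illustration, not part of the Boltzmann machine above) accepts uphill moves with probability exp(-deltaE/T) while the temperature cools geometrically, on the simple energy surface f(x) = (x-3)^2:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Toy simulated annealing on f(x) = (x - 3)^2: always accept downhill
// moves, accept uphill moves with probability exp(-deltaE / T), and
// shrink the temperature by a constant factor each step.
double anneal(double x0, double t_start, double cooling, int steps) {
    double x = x0, T = t_start;
    for (int i = 0; i < steps; ++i) {
        double step = ((rand() % 2001) - 1000) / 1000.0;   // in [-1, 1]
        double cand = x + step;
        double dE = (cand - 3)*(cand - 3) - (x - 3)*(x - 3);
        if (dE < 0 || std::exp(-dE / T) * 1000 > rand() % 1000)
            x = cand;                    // keep the move
        T *= cooling;                    // geometric cooling schedule
    }
    return x;
}
```

Early on, when T is high, the walker roams freely and can escape bad regions; as T falls it behaves more and more greedily and settles into the minimum at x = 3.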

A process similar to simulated annealing is now being used by the very first quantum computers, whose qubits perform quantum annealing to solve complex problems in machine learning.

A good example is finding the correct solution to a non-linear equation such as y = x^2, or the famous XOR problem; both are non-linear, as is the classification of features in an image. A neural network is good at solving these non-linear problems. But it helps to prepare the way by using linear problem solvers or heuristics - like the image filter or Laplacian edge detector - to normalise the image, so that the neural network has less chance of making a mistaken classification.
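The XOR claim is easy to verify by brute force (my own quick demonstration): search every small integer weight and bias for a single linear threshold unit and confirm that none reproduces XOR, while a genuinely linear problem like OR is solved straight away:

```cpp
#include <cassert>

// Can a single linear threshold unit s = (w1*a + w2*b >= bias) compute
// a given 4-row truth table t over inputs (a,b) in {0,1}^2?
// Exhaustive search over small integer weights; the range is arbitrary,
// but XOR is provably inseparable for ANY weights, so no range helps it.
bool linearly_separable(const int t[4]) {
    const int A[4] = {0, 0, 1, 1}, B[4] = {0, 1, 0, 1};
    for (int w1 = -5; w1 <= 5; ++w1)
        for (int w2 = -5; w2 <= 5; ++w2)
            for (int bias = -5; bias <= 5; ++bias) {
                bool ok = true;
                for (int r = 0; r < 4; ++r)
                    if (((w1*A[r] + w2*B[r] >= bias) ? 1 : 0) != t[r]) {
                        ok = false; break;
                    }
                if (ok) return true;   // found a separating line
            }
    return false;
}
```

This is exactly why XOR needs a hidden layer: no single line through the input plane puts {(0,1),(1,0)} on one side and {(0,0),(1,1)} on the other.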

For reading about my favourite Science Fiction or New Fiction Cyberpunk, please check out my website at Spacefarm dot org dot UK and read jUice dot extramindcorp dot com

If you liked this post be sure to read the next one entitled:

C Programming a Combinatorial Problem Solver


spacefarm.org.uk