Tuesday, December 13, 2016

How to extend Python on Windows (Deep Learning Course on Udacity related)

I have recently been learning Deep Learning on Udacity.

For 1_notmnist.ipynb, Problem 1: Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.

One alternative way might be the following code:

import os, fnmatch

from IPython.display import Image, display

img_files = []

def all_img_files(img_files, search_path, pattern='*.png'):
    for path, subdirs, files in os.walk(search_path):
        if files and fnmatch.fnmatch(files[0], pattern):
            img_files.append(os.path.join(path, files[0]))
            break

# train_folders and test_folders come from earlier cells of the notebook
for folder in train_folders:
    all_img_files(img_files, folder)

for folder in test_folders:
    all_img_files(img_files, folder)

for img in img_files:
    display(Image(filename=img))

However, I found this extremely slow, probably because os.walk gathers every subdirectory and file before returning; the break statement has only a small effect on the overall processing time.
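Before dropping down to C++, it is worth noting that a pure-Python version can also stop at the first match without walking whole subtrees, using os.scandir (new in Python 3.5). This is only a sketch; the function name first_matched_file is my own:

```python
import os
import re

def first_matched_file(directory, pattern=r'\.png$'):
    """Return the name of the first regular file in `directory` whose
    name matches the regex `pattern`, or '' if none matches."""
    regex = re.compile(pattern)
    for entry in os.scandir(directory):
        # Skip subdirectories; only plain files are candidates.
        if entry.is_file() and regex.search(entry.name):
            return entry.name
    return ''
```

Unlike os.walk, os.scandir yields entries lazily and never descends into subdirectories, so the loop can return as soon as one file matches.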

So I decided to write some code that genuinely fetches only the first PNG file in each of the A-J directories. Readers can follow the link below for reference on how this can be done via VC++:

There's plenty of material on how to write an extension for Python, so the following is just the source code without much explanation. I built it with Anaconda Python 3.5 and Visual C++ 2015; other platforms will probably need some adjustment:

#include <Python.h>
#include <tchar.h>
#include <stdio.h>
#include <strsafe.h>

#include <Windows.h>
#include <Shlwapi.h>

#include "deelx.h"

#pragma comment(lib, "python35.lib")
#pragma comment(lib, "User32.lib")
#pragma comment(lib, "Shlwapi.lib")

static PyObject *get_first_matched_file_error;

static PyObject* get_first_matched_file(PyObject* self, PyObject* args)
{
    WIN32_FIND_DATA ffd;
    TCHAR szDir[MAX_PATH];
    HANDLE hFind = INVALID_HANDLE_VALUE;

    int wchars_num;
    char* directoryA;
    wchar_t* directoryW;
    char* patternA;
    wchar_t* patternW;

    if (!PyArg_ParseTuple(args, "ss", &directoryA, &patternA))
        return NULL;

    wchars_num = MultiByteToWideChar(CP_UTF8, 0, directoryA, -1, NULL, 0);
    directoryW = new wchar_t[wchars_num];
    MultiByteToWideChar(CP_UTF8, 0, directoryA, -1, directoryW, wchars_num);

    if (!PathFileExists(directoryW))
    {
        PyErr_SetString(get_first_matched_file_error, "Non-existing directory");
        delete[] directoryW;
        return NULL;
    }

    // Prepare the string for use with the FindFile functions: copy it
    // to a buffer, then append "\*" to the directory name.
    StringCchCopy(szDir, MAX_PATH, directoryW);
    delete[] directoryW;
    StringCchCat(szDir, MAX_PATH, TEXT("\\*"));

    wchars_num = MultiByteToWideChar(CP_UTF8, 0, patternA, -1, NULL, 0);
    patternW = new wchar_t[wchars_num];
    MultiByteToWideChar(CP_UTF8, 0, patternA, -1, patternW, wchars_num);

    CRegexpT<wchar_t> regexp(patternW);

    // Find the first file in the directory.
    hFind = FindFirstFile(szDir, &ffd);

    if (INVALID_HANDLE_VALUE == hFind)
    {
        delete[] patternW;
        PyErr_SetString(get_first_matched_file_error, "Cannot open directory");
        return NULL;
    }

    PyObject* pyFileName = NULL;
    // Enumerate files in the directory until the pattern matches.
    do
    {
        if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
        {
            continue;
        }
        else
        {
            MatchResult result = regexp.Match(ffd.cFileName);
            if (result.IsMatched())
            {
                int chars_num = WideCharToMultiByte(CP_UTF8, 0, ffd.cFileName, -1, NULL, 0, NULL, NULL);
                char* cFileName = new char[chars_num];
                WideCharToMultiByte(CP_UTF8, 0, ffd.cFileName, -1, cFileName, chars_num, NULL, NULL);

                pyFileName = Py_BuildValue("s", cFileName);
                delete[] cFileName;

                break;
            }
        }
    } while (FindNextFile(hFind, &ffd) != 0);

    // No match found: return an empty string rather than NULL.
    if (pyFileName == NULL)
        pyFileName = Py_BuildValue("s", "");

    FindClose(hFind);
    delete[] patternW;

    return pyFileName;
}

static PyMethodDef get_first_matched_file_method[] = {
    {
        "get_first_matched_file", get_first_matched_file,
        METH_VARARGS, "Get the first file given directory and pattern"
    },
    { NULL, NULL, 0, NULL }        /* Sentinel */
};

static struct PyModuleDef get_first_matched_file_module =
{
    PyModuleDef_HEAD_INIT,
    "get_first_matched_file",  /* name of module */
    "Get the first file given directory and pattern",  /* module documentation, may be NULL */
    -1,  /* size of per-interpreter state of the module, or -1 if the module keeps state in global variables. */
    get_first_matched_file_method
};

PyMODINIT_FUNC PyInit_get_first_matched_file(void)
{
    PyObject* m = PyModule_Create(&get_first_matched_file_module);
    if (m == NULL)
        return NULL;

    get_first_matched_file_error = PyErr_NewException("get_first_matched_file.error", NULL, NULL);
    Py_INCREF(get_first_matched_file_error);
    PyModule_AddObject(m, "error", get_first_matched_file_error);

    return m;
}
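As an alternative to a full Visual Studio project, the extension could also be built with a setuptools script. This is only a sketch; the source file name get_first_matched_file.cpp is my assumption, and the libraries mirror the #pragma comment(lib, ...) lines above:

```python
from setuptools import Extension

# Hypothetical source file name; adjust to your project layout.
ext = Extension(
    'get_first_matched_file',
    sources=['get_first_matched_file.cpp'],
    libraries=['User32', 'Shlwapi'],  # python35 is linked by the build itself
)
# Pass ext_modules=[ext] to setuptools.setup() in a setup.py,
# then build with:  python setup.py build_ext --inplace
```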


There's only one dependency, namely deelx.h; refer to the websites below for it:

A testing script is as follows:

import sys
sys.path.append("C:\\Users\\MS User\\Documents\\Visual Studio 2015\\Projects\\PythonExtensions\\x64\\Release")

import get_first_matched_file

directory = "C:\\tensorflow\\tensorflow\\examples\\udacity\\notMNIST_large\\B"
pattern = "\\.png$"

file = get_first_matched_file.get_first_matched_file(directory, pattern)
print(file)

Enjoy Python, enjoy learning from Udacity.

Project setting:

How to change serving directory of Jupyter on Windows

Sometimes it's convenient to alter the default directory Jupyter serves from. For example, I prefer serving from C:\tensorflow\tensorflow\examples\udacity, since I git clone everything there.

First run the following command to generate the configuration file named jupyter_notebook_config.py; it usually resides in the .jupyter folder in your home directory:
jupyter notebook --generate-config


Now open the file and search for the following line:
#c.NotebookApp.notebook_dir = ''

Uncomment it and put the target directory between the quotation marks. Since we are on the Windows platform, we need to escape the backslash characters:
c.NotebookApp.notebook_dir = 'C:\\tensorflow\\tensorflow\\examples\\udacity'
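The escaping here is just ordinary Python string syntax, since the config file is itself a Python script; a raw string would denote the same path, as this quick check shows:

```python
# "\\" escapes a single backslash; the r-prefix disables escape
# processing altogether, so the two spellings are the same path.
escaped = 'C:\\tensorflow\\tensorflow\\examples\\udacity'
raw = r'C:\tensorflow\tensorflow\examples\udacity'
assert escaped == raw
print(raw)
```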

Final result:




Tuesday, December 6, 2016

How to use Anaconda

Anaconda is a one-stop distribution of Python-related scientific computing components. Its convenience is probably more obvious on Windows than on Linux. The following is a summary of how to use conda on the Windows platform.
To avoid anything unexpected, I suggest starting Anaconda from the Start menu by launching the Anaconda Prompt.

1. Show all virtual environments created:
conda info --envs

2. Activate specific environment, like root:
activate root

3. Deactivate the specified environment, like root:
deactivate
There’s no need to append an environment name, since the command knows which environment it is currently in.
However, try not to deactivate the default root environment: on *nix platforms, doing so tries to remove the Anaconda path from the current shell's environment variables. Just switch to another virtual environment instead; conda is clever enough to deactivate the previous one.

4. Create a specified environment (here “default” as an example) with a specified lib installed initially (here “matplotlib”):
conda create -n default matplotlib 

5. Create a specified environment (for instance “default”) by cloning another (here root):
conda create -n default --clone root

6. List the packages installed in the specified environment (for instance “default”):
conda list -n default

7. Install package (here with option value “tensorflow”) into the current environment:
conda install tensorflow

8. Install package (as an example, “tensorflow”) into the specified environment (here with name root):
conda install -n root tensorflow

9. Search for an uncommon package on the Anaconda website:
anaconda search -t conda package-name 

10. Show details about a found package:
anaconda show user/package-name

11. Install specific package from specified channel:
conda install --channel https://conda.anaconda.org/user package-name

12. Install a specific package (like tensorflow) with pip (it's recommended to do this inside a virtual environment), which auto-resolves dependencies:
pip install tensorflow

Thursday, November 24, 2016

Lesson learnt from twiddling Mandelbrot fractals with TensorFlow

I don't know whether this should be called a real lesson, since I have only recently been exposed to TensorFlow. I guess my later work will involve visualization of massive data, and I prefer that things be done on Windows (just my preference for the old Windows API). So the first time I read the Mandelbrot example in the book "Get started with TensorFlow", I wondered whether I could visualize the process on Windows. So I made a slight modification of the example, as follows:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005] 

Z = X + 1j*Y
c = tf.constant(Z.astype(np.complex64))

zs = tf.Variable(c)
ns = tf.Variable(tf.zeros_like(c, tf.float32))

zs_square = tf.pow(zs, 2)
zs_final = tf.add(zs_square, c)

not_diverged = tf.complex_abs(zs_final) < 4

update = tf.group(zs.assign(zs_final), ns.assign_add(tf.cast(not_diverged, tf.float32)), name = "update")
output = tf.identity(ns, name="output")

saver = tf.train.Saver()

sess = tf.Session()
sess.run(tf.initialize_all_variables())

#tf.assign(zs, sess.run(zs))
#tf.assign(ns, sess.run(ns))

tf.train.write_graph(sess.graph_def, "models/", "graph.pb", as_text = True)

saver.save(sess, "models/model.ckpt")

for i in range(200):
    sess.run(update)

plt.imshow(sess.run(ns))
plt.show()

sess.close()
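The graph above is just the escape-time iteration z ← z² + c with a per-pixel counter. The same update can be sketched in plain NumPy, which makes it easier to see what the "update" and "output" nodes compute (the grid bounds match the script above; the function name is my own):

```python
import numpy as np

def mandelbrot_counts(Z, iterations=200):
    """Escape-time counts: for how many steps each point stays within |z| < 4."""
    c = Z.astype(np.complex64)
    zs = c.copy()
    ns = np.zeros(c.shape, np.float32)
    # Diverged points overflow to inf/nan; silence the harmless warnings.
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(iterations):
            zs = zs * zs + c            # the "update" of zs
            ns += (np.abs(zs) < 4)      # the counter, like ns.assign_add
    return ns                           # what the "output" node returns

Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]
counts = mandelbrot_counts(X + 1j * Y, iterations=50)
```

Points inside the set (like 0) keep incrementing their counter every step, while points that diverge stop contributing as soon as |z| exceeds 4.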


And it produced the wonderful Mandelbrot fractal:

In order to appreciate the whole process of generating the Mandelbrot fractal, I wondered whether it was possible to freeze the model and load it under Windows: for each step, run the "update" node, then fetch the result via the "output" node.

However, this doesn't work, because freezing substitutes constants for the variables, which can then no longer be updated. See the following complaint:


I think it's quite understandable: the intention of TensorFlow is to let the trained model run as quickly as possible, so it is no surprise that variables are eventually eliminated.

If we uncomment the following lines and save the graph as binary:
tf.assign(zs, sess.run(zs))
tf.assign(ns, sess.run(ns))

tf.train.write_graph(sess.graph_def, "models/", "graph.pb", as_text = False)

The fact is, the variables don't seem to get initialized even when we do it explicitly in the program:


I am still working on it, but I log it here as a reminder: if you are applying TensorFlow to a scenario that requires no explicit input, please think it over first.


Thursday, November 10, 2016

Handwritten digits recognition via TensorFlow based on Windows MFC (V) - Result demo

The final finished project, named DigitRecognizer, was developed under Visual Studio 2015; see the following video for a demonstration.







However, there is still a long way to go to master TensorFlow.

Thanks to the guys at Google for developing TensorFlow; however, support for bazel on Windows still needs more improvement.

Thanks to the guys at Microsoft for developing Visual Studio, which always eases the pain of development on Windows.

Thanks to everyone who created and continues to nurture Machine Learning; I will always have much to learn from you, and you have my appreciation.

Happy learning, happy coding!

Handwritten digits recognition via TensorFlow based on Windows MFC (IV) - Load trained model

I think two good articles have detailed everything; many thanks for their efforts:
https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f#.t78tjznzu by Jim Fleming; http://jackytung8085.blogspot.kr/2016/06/loading-tensorflow-graph-with-c-api-by.html by Jacky Tung.

So I directly paste the code here for reference:

MnistModel.cc:

#include<Windows.h>

#include <stdio.h>

#include <vector>
#include <string>
#include <sstream>
#include <iostream>
#include <utility>

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

#include "MNistComm.h"

using std::vector;
using std::string;
using std::ostringstream;
using std::endl;
using std::pair;

using namespace tensorflow;

void fillErrMsg(MNIST_COMM_ERROR *err, MNIST_ERROR_CODE c, Status& status)
{
    memset(err, 0, sizeof(MNIST_COMM_ERROR));
        
    err->err = c;
        
    ostringstream ost;
    ost << status.ToString() << endl;
        
    snprintf(err->msg, MAX_MSG_SIZ, "%s", ost.str().c_str());
}

// Windows is natively Unicode, so everything here is wide-character
int wmain(int argc, wchar_t* argv[])
{    
    // Open file mapping object
    MnistShm mnistShm(false);
    if (!mnistShm)
        return MNIST_OPEN_SHM_FAILED;        
   
    MnistEvent mnistEvent(false);
    if (!mnistEvent)
        return MNIST_OPEN_EVT_FAILED;

    Session* session = NULL;
    Status status = NewSession(SessionOptions(), &session);
    if(!status.ok())
    {
        MNIST_COMM_ERROR err;
        fillErrMsg(&err, MNIST_SESSION_CREATION_FAILED, status);
        mnistShm.SetError(reinterpret_cast<char*>(&err));
        
        return MNIST_SESSION_CREATION_FAILED;
    }
        
    char modelPath[MAX_PATH];
    CMnistComm::WChar2Char(modelPath, argv[1], MAX_PATH - 1);
    
    GraphDef graph_def;    
    status = ReadBinaryProto(Env::Default(), modelPath, &graph_def);
    if (!status.ok())
    {        
        MNIST_COMM_ERROR err;
        fillErrMsg(&err, MNIST_MODEL_LOAD_FAILED, status);
        mnistShm.SetError(reinterpret_cast<char*>(&err));

        return MNIST_MODEL_LOAD_FAILED;
    }
    
    status = session->Create(graph_def);
    if (!status.ok()) {

        MNIST_COMM_ERROR err;
        fillErrMsg(&err, MNIST_GRAPH_CREATION_FAILED, status);
        mnistShm.SetError(reinterpret_cast<char*>(&err));

        return MNIST_GRAPH_CREATION_FAILED;
    }
    
    // Setup inputs and outputs:
    Tensor img(DT_FLOAT, TensorShape({1, MNIST_IMG_DIM}));

    MNIST_COMM_EVENT evt;
    
    while (evt = mnistEvent.WaitForEvent(MNIST_EVENT_PROC))
    {        
        auto buf = img.flat<float>().data();
    
        mnistShm.GetImageData(reinterpret_cast<char*>(buf));

        vector<pair<string, Tensor>> inputs = {
            { "input", img}
        };
        
        // The session will initialize the outputs
        vector<Tensor> outputs;
        // Run the session, evaluating our "logits" operation from the graph
        status = session->Run(inputs, {"recognize"}, {}, &outputs);
        if (!status.ok()) {
            MNIST_COMM_ERROR err;
            fillErrMsg(&err, MNIST_MODEL_RUN_FAILED, status);
            mnistShm.SetError(reinterpret_cast<char*>(&err));
            
            return MNIST_MODEL_RUN_FAILED;
        }
        
        auto weights = outputs[0].shaped<float, 1>({10});
        int digit = -1;

        // Pick the index of the largest weight -- an argmax over the outputs.
        float max_ = 0.0f;
        for (int i = 0; i < 10; i++)
        {
            if (weights(i) > max_)
            {
                max_ = weights(i);
                digit = i;
            }
        }
                
        mnistShm.SetImageLabel(reinterpret_cast<char*>(&digit));
        mnistEvent.NotifyReady();
                
    }

    session->Close();
    
    return 0;
}
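The digit-selection loop at the end is just an argmax over the ten output weights; the Python-side equivalent (with NumPy, and a made-up weights vector for illustration) is a one-liner:

```python
import numpy as np

# `weights` stands for the ten outputs fetched from the "recognize" node;
# the predicted digit is the index of the largest weight.
weights = np.array([0.01, 0.02, 0.90, 0.01, 0.01,
                    0.01, 0.01, 0.01, 0.01, 0.01])
digit = int(np.argmax(weights))
print(digit)  # -> 2
```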






Handwritten digits recognition via TensorFlow based on Windows MFC (III) - Inter-process Communication framework (II)

MnistComm.h:

#pragma once

#include <Windows.h>
#include <stdio.h>
#include <conio.h>
#include <tchar.h>
#include <stddef.h>

#include <map>
#include <string>

using std::map;
using std::string;

// Data structure definitions
typedef enum tagMNIST_ERROR_CODE
{
    MNIST_OK = 0,
    MNIST_CREATE_SHM_FAILED = 1,
    MNIST_OPEN_SHM_FAILED = 2,
    MNIST_CREATE_EVT_FAILED = 3,
    MNIST_OPEN_EVT_FAILED = 4,

    MNIST_SESSION_CREATION_FAILED = 10,
    MNIST_MODEL_LOAD_FAILED = 11,
    MNIST_GRAPH_CREATION_FAILED = 12,
    MNIST_MODEL_RUN_FAILED = 13,
} MNIST_ERROR_CODE;

#define MAX_MSG_SIZ 255

typedef struct tagMNIST_COMM_ERROR
{
    MNIST_ERROR_CODE err;
    char msg[MAX_MSG_SIZ + 1];
} MNIST_COMM_ERROR;

#define MNIST_IMG_HEIGHT 28
#define MNIST_IMG_WIDTH 28
#define MNIST_IMG_DIM (MNIST_IMG_HEIGHT * MNIST_IMG_WIDTH)
#define MNIST_IMG_SIZ (MNIST_IMG_DIM * sizeof(float))

typedef struct tagMNIST_IMG_LABEL
{
    float img[MNIST_IMG_HEIGHT][MNIST_IMG_WIDTH];
    int label;
} MNIST_IMG_LABEL;

typedef struct tagMNIST_COMM_SHM_LAYOUT
{
    MNIST_COMM_ERROR mnist_err;
    MNIST_IMG_LABEL mnist_data;
} MNIST_COMM_SHM_LAYOUT;

class MnistShm
{
public:
    MnistShm(bool host);
    ~MnistShm();

    bool operator !()
    {
        return !m_bInitialized;
    }

    bool GetError(char* pBuf)
    {
        CopyMemory(pBuf, m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_err), sizeof(MNIST_COMM_ERROR));
        return true;
    }

    bool SetError(char* pBuf)
    {
        CopyMemory(m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_err), pBuf, sizeof(MNIST_COMM_ERROR));
        return true;
    }

    bool GetImageData(char* pBuf)
    {
        CopyMemory(pBuf, m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_data), MNIST_IMG_SIZ);
        return true;
    }

    bool SetImageData(char* pBuf)
    {
        CopyMemory(m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_data), pBuf, MNIST_IMG_SIZ);
        return true;
    }

    bool GetImageLabel(char* pBuf)
    {
        CopyMemory(pBuf, m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_data) + MNIST_IMG_SIZ, sizeof(int));
        return true;
    }

    bool SetImageLabel(char* pBuf)
    {
        CopyMemory(m_pBuf + offsetof(MNIST_COMM_SHM_LAYOUT, mnist_data) + MNIST_IMG_SIZ, pBuf, sizeof(int));
        return true;
    }

private:
    static wchar_t* MnistShmName;

    bool m_bInitialized;
    HANDLE m_hMapFile;
    char* m_pBuf;
};

typedef enum tagMNIST_COMM_EVENT
{
    MNIST_EVENT_NULL = -1,
    MNIST_EVENT_QUIT = 0,
    MNIST_EVENT_PROC = 1,
    MNIST_EVENT_REDY = 2,
    MNIST_EVENT_COUNT = 3,
} MNIST_COMM_EVENT;

typedef struct tagMNIST_EVENT_NAME
{
    MNIST_COMM_EVENT e;
    wchar_t* name;
} MNIST_EVENT_NAME;

typedef HANDLE MNIST_EVENT_HANDLE[MNIST_EVENT_COUNT];

class MnistEvent
{
public:
    MnistEvent(bool host);
    ~MnistEvent();

    bool operator !()
    {
        return !m_bInitialized;
    }

    bool NotifyQuit()
    {
        return PulseEvent(m_hEvt[MNIST_EVENT_QUIT]);
    }

    bool NotifyProc()
    {
        return PulseEvent(m_hEvt[MNIST_EVENT_PROC]);
    }

    bool NotifyReady()
    {
        return PulseEvent(m_hEvt[MNIST_EVENT_REDY]);
    }

    MNIST_COMM_EVENT WaitForEvent(MNIST_COMM_EVENT event)
    {
        DWORD dwEvent;
        MNIST_COMM_EVENT e;

        do
        {
            dwEvent = WaitForMultipleObjects(MNIST_EVENT_COUNT, m_hEvt, FALSE, INFINITE);
            e = static_cast<MNIST_COMM_EVENT>(dwEvent - WAIT_OBJECT_0);
            if (e == event)
                break;
        } while (e != MNIST_EVENT_QUIT);

        return e;
    }

private:
    static MNIST_EVENT_NAME MnistEventName[MNIST_EVENT_COUNT];

    bool m_bHost;
    bool m_bInitialized;
    MNIST_EVENT_HANDLE m_hEvt;
};

class CMnistComm
{
public:
    CMnistComm();
    ~CMnistComm();

    static bool Char2WChar(char* cs, wchar_t* wcs, int size)
    {
        return swprintf(wcs, size, L"%S", cs);
    }

    static bool WChar2Char(char* cs, wchar_t* wcs, int size)
    {
        return snprintf(cs, size, "%S", wcs);
    }
};
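The shared-memory accessors above rely on offsetof() into MNIST_COMM_SHM_LAYOUT. As a sanity check, the same offsets can be computed from Python with a ctypes mirror of the structs; this is only a sketch, assuming the default MSVC-compatible field alignment:

```python
import ctypes

MAX_MSG_SIZ = 255
HEIGHT, WIDTH = 28, 28  # MNIST_IMG_HEIGHT / MNIST_IMG_WIDTH

class MNIST_COMM_ERROR(ctypes.Structure):
    _fields_ = [('err', ctypes.c_int),                        # MNIST_ERROR_CODE
                ('msg', ctypes.c_char * (MAX_MSG_SIZ + 1))]

class MNIST_IMG_LABEL(ctypes.Structure):
    _fields_ = [('img', (ctypes.c_float * WIDTH) * HEIGHT),
                ('label', ctypes.c_int)]

class MNIST_COMM_SHM_LAYOUT(ctypes.Structure):
    _fields_ = [('mnist_err', MNIST_COMM_ERROR),
                ('mnist_data', MNIST_IMG_LABEL)]

# The offsets the C++ accessors compute with offsetof():
print(MNIST_COMM_SHM_LAYOUT.mnist_data.offset)   # where the image data starts
print(ctypes.sizeof(MNIST_COMM_SHM_LAYOUT))      # total shared-memory size
```

A mismatch between these numbers and the C++ side would indicate a packing or alignment difference between the two processes.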



MnistComm.cpp:

#include "MnistComm.h"

wchar_t* MnistShm::MnistShmName = _T("MnistSharedMemory");

MnistShm::MnistShm(bool host) : m_bInitialized(false), m_hMapFile(NULL)
{
    if (host)
        m_hMapFile = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
            0, sizeof(MNIST_COMM_SHM_LAYOUT), MnistShmName);
    else
        m_hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, MnistShmName);

    if (m_hMapFile == NULL)
    {
        OutputDebugString(host ? TEXT("Could not create file mapping object") :
            TEXT("Could not open file mapping object"));
        return;
    }

    m_pBuf = reinterpret_cast<char*>(MapViewOfFile(m_hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(MNIST_COMM_SHM_LAYOUT)));
    if (m_pBuf == NULL)
    {
        OutputDebugString(TEXT("Could not map view of file"));
        return;
    }

    m_bInitialized = true;
}

MnistShm::~MnistShm()
{
    if (m_pBuf)
        UnmapViewOfFile(m_pBuf);
    if (m_hMapFile)
        CloseHandle(m_hMapFile);
}

MNIST_EVENT_NAME MnistEvent::MnistEventName[] =
{
    { MNIST_EVENT_QUIT, TEXT("MnistEventQuit") },
    { MNIST_EVENT_PROC, TEXT("MnistEventProc") },
    { MNIST_EVENT_REDY, TEXT("MnistEventRedy") },
};

MnistEvent::MnistEvent(bool host) :
    m_bHost(host),
    m_bInitialized(false),
    m_hEvt{ 0 }
{
    if (host)
    {
        m_hEvt[MNIST_EVENT_QUIT] = CreateEvent(NULL, TRUE, FALSE, MnistEventName[MNIST_EVENT_QUIT].name);
        m_hEvt[MNIST_EVENT_PROC] = CreateEvent(NULL, TRUE, FALSE, MnistEventName[MNIST_EVENT_PROC].name);
        m_hEvt[MNIST_EVENT_REDY] = CreateEvent(NULL, TRUE, FALSE, MnistEventName[MNIST_EVENT_REDY].name);
    }
    else
    {
        m_hEvt[MNIST_EVENT_QUIT] = OpenEvent(EVENT_ALL_ACCESS, FALSE, MnistEventName[MNIST_EVENT_QUIT].name);
        m_hEvt[MNIST_EVENT_PROC] = OpenEvent(EVENT_ALL_ACCESS, FALSE, MnistEventName[MNIST_EVENT_PROC].name);
        m_hEvt[MNIST_EVENT_REDY] = OpenEvent(EVENT_ALL_ACCESS, FALSE, MnistEventName[MNIST_EVENT_REDY].name);
    }

    if (m_hEvt[MNIST_EVENT_QUIT] == NULL)
    {
        OutputDebugString(host ? TEXT("Could not create quit event object") :
            TEXT("Could not open quit event object"));
        return;
    }

    if (m_hEvt[MNIST_EVENT_PROC] == NULL)
    {
        OutputDebugString(host ? TEXT("Could not create processing event object") :
            TEXT("Could not open processing event object"));
        return;
    }

    if (m_hEvt[MNIST_EVENT_REDY] == NULL)
    {
        OutputDebugString(host ? TEXT("Could not create ready event object") :
            TEXT("Could not open ready event object"));
        return;
    }

    m_bInitialized = true;
}

MnistEvent::~MnistEvent()
{
    if (m_hEvt[MNIST_EVENT_QUIT])
        CloseHandle(m_hEvt[MNIST_EVENT_QUIT]);

    if (m_hEvt[MNIST_EVENT_PROC])
        CloseHandle(m_hEvt[MNIST_EVENT_PROC]);

    if (m_hEvt[MNIST_EVENT_REDY])
        CloseHandle(m_hEvt[MNIST_EVENT_REDY]);
}


CMnistComm::CMnistComm()
{
}


CMnistComm::~CMnistComm()
{
}