7 changes: 4 additions & 3 deletions .clang_format.hook
@@ -1,13 +1,14 @@
#!/bin/bash
set -e

readonly VERSION="3.8"
readonly SUPPORTED_VERSION="3.8"


version=$(clang-format -version)

if ! [[ $version == *"$VERSION"* ]]; then
if ! [[ $version == *"$SUPPORTED_VERSION"* ]]; then
echo "clang-format version check failed."
echo "a version contains '$VERSION' is needed, but get '$version'"
echo "a version contains '$SUPPORTED_VERSION' is needed, but get '$version'"
echo "you can install the right version, and make an soft-link to '\$PATH' env"
exit -1
fi
2 changes: 2 additions & 0 deletions .flake8
@@ -0,0 +1,2 @@
[flake8]
max-line-length = 120
12 changes: 11 additions & 1 deletion .pre-commit-config.yaml
@@ -22,4 +22,14 @@
description: Format files with ClangFormat.
entry: bash ./.clang_format.hook -i
language: system
files: \.(c|cc|cxx|cpp|cu|h|hpp|hxx|proto)$
files: \.(c|cc|cxx|cpp|cu|h|hpp|hxx)$


- repo: local
hooks:
- id: python-format-checker
name: python-format-checker
description: Format python files using PEP8 standard
entry: flake8
language: system
files: \.(py)$
8 changes: 7 additions & 1 deletion .travis.yml
@@ -13,6 +13,10 @@ os:
# TODO(ChunweiYan) support osx in the future
#- osx

env:
- JOB=check_style
- JOB=test

addons:
apt:
packages:
@@ -29,12 +33,14 @@
- nodejs

before_install:
- if [[ "$JOB" == "check_style" ]]; then sudo ln -s /usr/bin/clang-format-3.8 /usr/bin/clang-format; sudo pip install pre-commit flake8; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew upgrade python; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install brew-pip; fi

script:
/bin/bash ./tests.sh all
- if [[ "$JOB" == "check_style" ]]; then ./travis/check_style.sh; fi
- if [[ "$JOB" == "test" ]]; then /bin/bash ./tests.sh all; fi

notifications:
email:
10 changes: 5 additions & 5 deletions demo/README.md
@@ -1,27 +1,27 @@
# VisualDL demos

VisualDL supports Python and C++ based DL frameworks; there are several demos for different platforms.

## PaddlePaddle
Located in `./paddle`.

This is a visualization for `resnet` on the `cifar10` dataset; we visualize the CONV parameters, and there are some interesting patterns.

## PyTorch GAN
Located in `./pytorch-CycleGAN-and-pix2pix`.

This submodule is forked from [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix); it is a great model and the generated fake images are really funny.

This demo only works in CycleGAN mode; read the [CycleGAN train doc](https://github.com/Superjomn/pytorch-CycleGAN-and-pix2pix#cyclegan-traintest) and the [changes to the original code](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/compare/master...Superjomn:master) for more information.

## MxNet Mnist
Located in `./mxnet_demo`.

By adding VisualDL as callbacks to `model.fit`, we can use the Python SDK in MxNet,
but it seems that the outside program can only retrieve parameters in epoch callbacks,
which limits the number of steps available for visualization.
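For reference, here is a minimal, self-contained sketch of the callback pattern described above: hooking a VisualDL scalar into MXNet's `batch_end_callback`. The log directory, counter handling, and the helper name `make_scalar_callback` are illustrative assumptions; the SDK calls follow the demo shown below.

```python
# Minimal sketch: record the training metric to a VisualDL scalar on every batch.
from visualdl import LogWriter

logw = LogWriter("./tmp", sync_cycle=10)
with logw.mode("train") as logger:
    scalar0 = logger.scalar("scalars/scalar0")


def make_scalar_callback():
    cnt_step = [0]  # mutable counter shared with the closure below

    def _callback(param):
        if param.eval_metric is None:
            return
        for name, value in param.eval_metric.get_name_value():
            scalar0.add_record(cnt_step[0], value)
        cnt_step[0] += 1

    return _callback

# usage: model.fit(..., batch_end_callback=[make_scalar_callback()], ...)
```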
41 changes: 23 additions & 18 deletions demo/mxnet/mxnet_demo.py
@@ -1,5 +1,3 @@
import numpy as np
import mxnet as mx
import logging

import mxnet as mx
@@ -10,7 +8,6 @@
mnist = mx.test_utils.get_mnist()
batch_size = 100


# Provide a folder to store data for log, model, image, etc. VisualDL's visualization will be
# based on this folder.
logdir = "./tmp"
@@ -44,8 +41,10 @@ def _callback(param):
for name, value in name_value:
scalar0.add_record(cnt_step, value)
cnt_step += 1

return _callback


def add_image_histogram():
def _callback(iter_no, sym, arg, aux):
image0.start_sampling()
@@ -57,6 +56,7 @@ def _callback(iter_no, sym, arg, aux):
histogram0.add_record(iter_no, list(data))

image0.finish_sampling()

return _callback


@@ -65,18 +65,22 @@ def _callback(iter_no, sym, arg, aux):

logging.getLogger().setLevel(logging.DEBUG) # logging to stdout

train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
train_iter = mx.io.NDArrayIter(
mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'],
batch_size)

data = mx.sym.var('data')
# first conv layer
conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2, 2), stride=(2, 2))
pool1 = mx.sym.Pooling(
data=tanh1, pool_type="max", kernel=(2, 2), stride=(2, 2))
# second conv layer
conv2 = mx.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2, 2), stride=(2, 2))
pool2 = mx.sym.Pooling(
data=tanh2, pool_type="max", kernel=(2, 2), stride=(2, 2))
# first fullc layer
flatten = mx.sym.flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
@@ -89,21 +93,22 @@ def _callback(iter_no, sym, arg, aux):
# create a trainable module on CPU
lenet_model = mx.mod.Module(symbol=lenet, context=mx.cpu())


# train with the same
lenet_model.fit(train_iter,
eval_data=val_iter,
optimizer='sgd',
optimizer_params={'learning_rate': 0.1},
eval_metric='acc',
# integrate our customized callback method
batch_end_callback=[add_scalar()],
epoch_end_callback=[add_image_histogram()],
num_epoch=5)
lenet_model.fit(
train_iter,
eval_data=val_iter,
optimizer='sgd',
optimizer_params={'learning_rate': 0.1},
eval_metric='acc',
# integrate our customized callback method
batch_end_callback=[add_scalar()],
epoch_end_callback=[add_image_histogram()],
num_epoch=5)

test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)
prob = lenet_model.predict(test_iter)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'],
batch_size)

# predict accuracy for lenet
acc = mx.metric.Accuracy()
26 changes: 16 additions & 10 deletions demo/paddle/cifar10_image_classification_vgg.py
@@ -117,8 +117,11 @@ def conv_block(input, num_filter, groups, dropouts):
else:
raise ValueError("%s network is not supported" % net_type)

predict = fluid.layers.fc(input=net, size=classdim, act='softmax',
param_attr=ParamAttr(name="param1", initializer=NormalInitializer()))
predict = fluid.layers.fc(
input=net,
size=classdim,
act='softmax',
param_attr=ParamAttr(name="param1", initializer=NormalInitializer()))
cost = fluid.layers.cross_entropy(input=predict, label=label)
avg_cost = fluid.layers.mean(x=cost)

@@ -131,8 +134,7 @@ def conv_block(input, num_filter, groups, dropouts):
PASS_NUM = 1

train_reader = paddle.batch(
paddle.reader.shuffle(
paddle.dataset.cifar.train10(), buf_size=128 * 10),
paddle.reader.shuffle(paddle.dataset.cifar.train10(), buf_size=128 * 10),
batch_size=BATCH_SIZE)

place = fluid.CPUPlace()
@@ -150,9 +152,10 @@ def conv_block(input, num_filter, groups, dropouts):
for pass_id in range(PASS_NUM):
accuracy.reset(exe)
for data in train_reader():
loss, conv1_out, param1, acc = exe.run(fluid.default_main_program(),
feed=feeder.feed(data),
fetch_list=[avg_cost, conv1, param1_var] + accuracy.metrics)
loss, conv1_out, param1, acc = exe.run(
fluid.default_main_program(),
feed=feeder.feed(data),
fetch_list=[avg_cost, conv1, param1_var] + accuracy.metrics)
pass_acc = accuracy.eval(exe)

if sample_num == 0:
@@ -165,11 +168,14 @@ def conv_block(input, num_filter, groups, dropouts):
idx = idx1
if idx != -1:
image_data = data[0][0]
input_image_data = np.transpose(image_data.reshape(data_shape), axes=[1, 2, 0])
input_image.set_sample(idx, input_image_data.shape, input_image_data.flatten())
input_image_data = np.transpose(
image_data.reshape(data_shape), axes=[1, 2, 0])
input_image.set_sample(idx, input_image_data.shape,
input_image_data.flatten())

conv_image_data = conv1_out[0][0]
conv_image.set_sample(idx, conv_image_data.shape, conv_image_data.flatten())
conv_image.set_sample(idx, conv_image_data.shape,
conv_image_data.flatten())

sample_num += 1
if sample_num % num_samples == 0:
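The `set_sample` calls being re-wrapped above all follow VisualDL's image sampling pattern. As a reference, a self-contained sketch of that pattern with random data; the tag, shapes, and loop bounds are illustrative only, while the SDK calls match the ones used in this demo and in `vdl_scratch.py` below.

```python
# Illustrative sketch of the VisualDL image reservoir-sampling loop: start a pass,
# ask whether each candidate sample is kept, record it, then finish the pass.
import numpy as np
from visualdl import LogWriter

logw = LogWriter("./tmp", sync_cycle=10)
with logw.mode("train") as logger:
    image0 = logger.image("samples/input_image", 4)  # keep up to 4 samples per pass

for pass_id in range(3):
    image0.start_sampling()
    for _ in range(100):
        idx = image0.is_sample_taken()  # >= 0 means this sample was selected
        if idx >= 0:
            shape = [32, 32, 3]
            data = np.random.randint(0, 256, size=shape).flatten()
            image0.set_sample(idx, shape, list(data))
    image0.finish_sampling()
```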
19 changes: 8 additions & 11 deletions demo/vdl_scratch.py
@@ -1,13 +1,9 @@
#!/user/bin/env python
import math
import os
import random
import subprocess


import numpy as np
from PIL import Image
from scipy.stats import norm
from visualdl import ROOT, LogWriter
from visualdl.server.log import logger as log

@@ -44,10 +40,10 @@

for step in range(1, 50):
histogram0.add_record(step,
np.random.normal(
0.1 + step * 0.003,
200. / (120 + step),
size=1000))
np.random.normal(
0.1 + step * 0.003,
200. / (120 + step),
size=1000))
# create image
with logw.mode("train") as logger:
image = logger.image("scratch/dog", 4) # randomly sample 4 images one pass
@@ -70,11 +66,10 @@

# a more efficient way to sample images
# check whether this image will be taken by reservoir sampling
idx = image.is_sample_taken()
idx = image.is_sample_taken()
if idx >= 0:
data = np.array(
dog_jpg.crop((left_x, left_y, right_x,
right_y))).flatten()
dog_jpg.crop((left_x, left_y, right_x, right_y))).flatten()
# add this image to log
image.set_sample(idx, target_shape, data)
# you can also just write followig codes, it is more clear, but need to
@@ -95,6 +90,7 @@
image0.add_sample(shape, list(data))
image0.finish_sampling()


def download_graph_image():
'''
This is a scratch demo; it does not generate an ONNX proto, but just downloads an image
@@ -110,4 +106,5 @@ def download_graph_image():
f.write(graph_image)
log.warning('graph ready!')


download_graph_image()
12 changes: 6 additions & 6 deletions docs/README.md
@@ -9,7 +9,7 @@ Most of the DNN platforms are using Python. VisualDL supports Python out of the
By just adding a few lines of configuration to the code, VisualDL can provide rich visual support for the training process.

In addition to the Python SDK, the underlying VisualDL is written in C++, and its exposed C++ SDK can be integrated into other platforms.
Users can access the original features and monitor customized metrics.

## Components
VisualDL supports four components:
@@ -27,7 +27,7 @@ Compatible with ONNX (Open Neural Network Exchange) [https://github.com/onnx/onn
</p>

### scalar
Show the error trend throughout the training.

<p align="center">
<img src="./introduction/scalar.png" width="60%"/>
@@ -64,7 +64,7 @@ logger = LogWriter(dir, sync_cycle=10)
with logger.mode("train"):
# create a scalar component called 'scalars/scalar0'
scalar0 = logger.scalar("scalars/scalar0")


# add some records during DL model running, let's start from another block.
with logger.mode("train"):
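The rest of this Python snippet is collapsed in the diff view. A minimal, self-contained sketch of the pattern it builds up to is shown here; the loop bounds and values are illustrative assumptions, not the original documentation text.

```python
# Illustrative sketch: create the scalar component once, then add (step, value)
# records from another block, as the snippet above describes.
from visualdl import LogWriter

logger = LogWriter("./tmp", sync_cycle=10)
with logger.mode("train"):
    scalar0 = logger.scalar("scalars/scalar0")

with logger.mode("train"):
    for step in range(100):
        scalar0.add_record(step, step * 0.1)
```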
@@ -86,12 +86,12 @@ namespace cp = visualdl::components;
int main() {
const std::string dir = "./tmp";
vs::LogWriter logger(dir, 10);

logger.SetMode("train");
auto tablet = logger.AddTablet("scalars/scalar0");

cp::Scalar<float> scalar0(tablet);

for (int step = 0; step < 1000; step++) {
float v = (float)std::rand() / RAND_MAX;
scalar0.AddRecord(step, v);
5 changes: 1 addition & 4 deletions docs/graph_data_format.md
@@ -8,7 +8,7 @@ Facebook has an open-source project called [ONNX](http://onnx.ai/)(Open Neural N
## IR of ONNX
The description of ONNX IR can be found [here](https://github.com/onnx/onnx/blob/master/docs/IR.md). The most important part is the definition of [Graph](https://github.com/onnx/onnx/blob/master/docs/IR.md#graphs).

Each computation data flow graph is structured as a list of nodes that form the graph. Each node is called an operator. Nodes have zero or more inputs, one or more outputs, and zero or more attribute-value pairs.
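To make the node/operator structure concrete, here is a toy sketch in Python; the operator names, field names, and sample graph are illustrative assumptions, not ONNX's actual message types.

```python
# Toy illustration of the IR shape described above: a graph is a list of operator
# nodes, each with zero or more inputs, one or more outputs, and attribute-value pairs.
graph = [
    {
        "op_type": "Conv",
        "inputs": ["data", "conv1_w"],
        "outputs": ["conv1_out"],
        "attrs": {"kernel_shape": [5, 5]},
    },
    {
        "op_type": "Relu",
        "inputs": ["conv1_out"],
        "outputs": ["relu1_out"],
        "attrs": {},
    },
]

# walk the graph the way a visualizer would
for node in graph:
    print(node["op_type"], node["inputs"], "->", node["outputs"])
```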

## Rest API data format
The frontend uses a REST API to get data from the server. The data format is JSON. The data structure of a Graph is shown below; each Graph has three vectors:
@@ -112,6 +112,3 @@ Frontend uses rest API to get data from the server. The data format will be JSON
]
}
```


