Conversation

@Conxz
Contributor

@Conxz Conxz commented Apr 9, 2011

Thanks to Michael first of all.
Separated Michael's shared code for TBSS (git://gist.github.com/905015.git) into 4 workflows.
Added a slicer node to tbss1.
Created a tbss_all workflow by combining the 4 workflows.
Added test_tbss.py for testing the workflow.

'Find the best target from all images in FA' has not yet been included, but the workflow already covers the common research process.

Please give me your comments.

@mwaskom
Member

mwaskom commented Apr 10, 2011

Hey, I haven't had much time to look too in-depth at this, but it looks like a great start.

One small suggestion: perhaps change a few of the ImageMaths nodes to use interfaces from the fsl.maths module? (They are imported into the top-level fsl namespace.) That might make some of the steps clearer for users who read the workflow to understand what is happening.

I'll try to take a closer look in the next couple of days.

@Conxz
Contributor Author

Conxz commented Apr 11, 2011

I will give it a try. Thanks.

@chrisgorgo
Member

This looks really good! Thanks for your contribution. However, to make it really useful for a broader audience we need to do two things:

  1. Create a tutorial, such as the fsl_dti_tutorial. Essentially this is what you already have in the tbss_test file, maybe with the addition of randomise at the end. The data the tutorial is built on needs to be publicly available so everyone can use it. If you are willing to donate some anonymized data, we are happy to add it to our example data set.
  2. A test that compares the TBSS pipeline with the FSL scripts. Because we are recreating, to a certain extent, what is done in FSL, we need an automated way of checking that we produce the same results. This involves running your workflow and the FSL scripts on the same dataset and comparing the results. Something similar has been done for bedpostx and eddy_correct: https://github.com/nipy/nipype/blob/master/nipype/workflows/fsl/tests/test_dti.py
@Conxz
Contributor Author

Conxz commented Apr 15, 2011

Hi,
I have done some work on the tbss workflow these days, following your suggestions.

  1. I am not sure whether fsl.dti.Randomise() is stable enough to use, so I have built a node with fsl.MultipleRegressDesign() and fsl.FLAMEO() to do what I want. (Not committed yet.)
  2. About the data: as we can see, in the path fsl_course_data/tbss there are 6 FA files available to us all.
  3. About the test: I have made a file (test_tbss.py) based on the one you wrote. It compares results from the TBSS pipeline with those from the FSL scripts directly. But I am not sure how to run the test. Maybe using nosetests? Could you give me some tips, or point me to a page describing the testing steps? Thanks.
@chrisgorgo
Member

  1. fsl.dti.Randomise() definitely needs some work (not much, but someone has to do it). I also don't think you can use a parametric test on FA data (due to the broken normality assumption).

  2. Yes, that is true, although I recall that Michael had some comments about this dataset. See: TBSS Components #95 (comment). Also, for testing you should follow the pattern I used for the dti workflow tests: define where the FSL course data set lives with an environment variable instead of hardcoding it.

    @skipif(no_fsl_course_data)
    def test_create_eddy_correct_pipeline():
        fsl_course_dir = os.environ["FSL_COURSE_DATA"]

  3. To run a test:

    nosetests nipype/workflows/fsl/test_tbss.py --verbose

The test you have written is a step in the right direction, but I think we should be testing something else. At the moment you are testing whether our implementation of TBSS gives the same results as a snapshot of FSL TBSS. However, what we are really interested in is automatically detecting when FSL changes something in their procedure. Therefore what we should do is run TBSS twice: once using the original FSL scripts and once using your workflow.
How to do this in practice is an open question; you don't have to use the old TBSS interfaces, you can just run the scripts directly if you prefer.

@mwaskom
Member

mwaskom commented Apr 15, 2011

I have a branch where I'm working on sprucing up the randomise interface. You should definitely not use flameo with TBSS data. I'll try to get it up and running in the next few days.

It's worth pointing out again that, although we should have Randomise fully operational if we're going to claim to support TBSS, it's not actually a component of the FSL tbss scripts.

@Conxz
Contributor Author

Conxz commented Apr 17, 2011

About the test:
I ran test_tbss on the data from fsl_course_data/tbss and found a mismatch of 0.00165488811018% between the all_FA_skeletonised.nii.gz produced by the tbss workflow and the one produced by the FSL TBSS scripts. I wonder if this is acceptable. (I do not think so.)
Of course, I will try to find where the difference arises.
And I will run the test on my own data later.
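The mismatch percentage itself can be computed voxelwise with numpy; a minimal sketch (the file paths in the comments are illustrative, and loading would use nibabel):

```python
import numpy as np

def percent_mismatch(a, b, tol=1e-6):
    """Percentage of voxels whose values differ by more than tol."""
    mismatched = np.abs(a - b) > tol
    return 100.0 * mismatched.sum() / mismatched.size

# e.g., with nibabel (hypothetical paths):
# a = nib.load("workflow/all_FA_skeletonised.nii.gz").get_fdata()
# b = nib.load("fsl_scripts/all_FA_skeletonised.nii.gz").get_fdata()
# print(percent_mismatch(a, b))
```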

About Randomise:
I am glad to hear that Michael is working on it now, and I will do my best to test and verify the tbss workflow.

@mwaskom
Member

mwaskom commented Apr 17, 2011

If I remember correctly, when I wrote that workflow there was a difference at the skeletonisation stage. FSL derives a brain mask from the data, whereas my workflow just used the MNI-space brain mask. I believe that when I changed the mask node to match the way FSL does it, I got data that were identical to FSL's. But it's possible I did that after posting my gist with the TBSS workflow? I'd check into that first. Sorry for not remembering more precisely.

@chrisgorgo
Member

It is probably better to test the intermediate outputs instead of just the final output anyway.

@chrisgorgo
Member

As @satra kindly pointed out to me, you can use the MultipleRegressDesign interface to create the design and contrast files for Randomise for the final step of a TBSS analysis.

Member

Perhaps in the docstrings, we should just say what happens, instead of asserting that it's the same as in the FSL scripts (which is not always transparent...)

Contributor Author

Do you mean we should use an underlying function to take its place?
But I think we mostly just use tbss_all, or tbss1~4 at most, when we use the workflow, and what the user needs to know are the inputs and outputs of tbss_all and the individual workflows.

Member

No, I mean: describe the actual data processing that's happening, e.g. for tbss_1_preproc, "erodes brain FA images and creates brain masks", etc.

@Conxz
Contributor Author

Conxz commented Apr 21, 2011

To @chrisfilo:
Do you mean I can combine MultipleRegression and Randomise as the analysis step after tbss?
However, as stated in nipype, the Randomise interface is UNSTABLE to use.
Do you mean the combination works well now?

@chrisgorgo
Member

Yes - I would give it a try. If it's broken we'll fix it.

@Conxz
Contributor Author

Conxz commented Apr 21, 2011

I ran the test, but it takes a lot of time.
It is still running now, even though I submitted it about 4 hours ago.
I am not sure whether it is working.


@mwaskom
Member

mwaskom commented Apr 21, 2011

The big problem with Randomise is that it has no outputs. I'm working on that, but exploring the relationship between command-line settings and the outputs that get generated is slow, since it takes a while to run (and seems not to want to run properly below some number of permutations) :)

@mwaskom
Member

mwaskom commented Apr 25, 2011

Hey,
I just pushed some work where I added outputs to randomise: mwaskom@20c3ded

Unfortunately, I'm extremely bandwidth-limited at the moment and don't have the time to fully test these. Xiangzhen, perhaps you could take a look?

@Conxz
Contributor Author

Conxz commented May 1, 2011

These days I have been preparing a report, so I have not had a chance to look at it yet.
I will try it now. Thanks very much.

@Conxz
Contributor Author

Conxz commented May 2, 2011

Hi,
Since I get both positive and negative stat values in *_tstat1.nii.gz after running randomise, do you know whether the (un)corrected p-value maps are for the positive relationship only, or for both?
Thanks!

Best,
Xiangzhen

@mwaskom
Member

mwaskom commented May 2, 2011

I don't know off the top of my head, but I would assume so. I suppose the easiest thing to do would be to take the T stat values of a few voxels, convert them to p values manually using one or two tails, and then check whether that matches the p values in the voxels you took the t stats from.
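That manual cross-check can be sketched with scipy (this is just an aid for the comparison, not how randomise computes its permutation-based p values):

```python
from scipy import stats

def t_to_p(tval, dof, tails=1):
    """Convert a t statistic to a p value using the upper tail
    (pass tails=2 for a two-tailed test)."""
    return tails * stats.t.sf(tval, dof)
```

For a voxel with t = 2.0 and n - 1 = 10 degrees of freedom, compare t_to_p(2.0, 10) and t_to_p(2.0, 10, tails=2) against the value stored in the p-value map for that voxel (noting that randomise's corrp images store 1 - p rather than p).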

@Conxz
Contributor Author

Conxz commented May 4, 2011

Hi Michael,
I have received a reply saying "The testing (and t->p conversion) is one-tailed."
Thanks very much.

Contributor Author

The test_tbss workflow works well for all comparisons between two single files, but not for file lists.
I wonder whether I can use a MapNode here, or should I merge each file list into one file before comparing outputs?

@satra
Member

satra commented May 17, 2011

what's the status of this pull request?

@Conxz
Contributor Author

Conxz commented May 31, 2011

So far, I have tested tbss.py via test_tbss.py.
The script now works well for me.
However, I still use the FSL command-line tools 'randomise'/'randomise_parallel' and have not yet tried Michael's work on randomise.

Thanks!

Contributor Author

There is always a warning like the following:

    /usr/local/neurosoft/lib/python2.6/site-packages/nipype/interfaces/base.py:363: UserWarning: Input project_data requires inputs: threshold, distance_map, data_file
      warn(msg)

Member

It cannot know ahead of time that you will set one of those values, so it simply warns the user. If at runtime you don't set one of those values, you will get an error.

@satra
Member

satra commented Dec 23, 2011

@bnucon are you still working on this or do you have the time to update this and make sure it works with the latest nipype?

@Conxz
Contributor Author

Conxz commented Jan 11, 2012

@satra
Yes, I will test this and update it in the next two days.

@satra
Member

satra commented Jan 11, 2012

awesome thanks!

@Conxz
Contributor Author

Conxz commented Jan 15, 2012

@satra
Sorry, I still have not found the time to finish the update.
As you may know, during the Spring Festival days in China, buying a ticket is a big problem. I have to go back home tomorrow morning and will be back around 2 February. During those roughly two weeks I may have no internet access.
Sorry again.

@chrisgorgo
Member

Is the branch ready for another review?

Thanks,
Chris

@Conxz
Contributor Author

Conxz commented Feb 7, 2012

Yes.
I have tested it on my data.
Any suggestions, comments or even criticisms are welcome.

Best,
Xiangzhen

Member

This also belongs in /examples.

@Conxz Conxz merged commit f3aa807 into nipy:master Feb 16, 2012