
Commit e0b87e7

deriving study name from lowercase storage collection

1 parent 5e83001

4 files changed: +6 −15 lines

docs/config.md

Lines changed: 1 addition & 8 deletions
````diff
@@ -39,13 +39,6 @@ ANONYMIZE_PIXELS=False
 
 **Important** the pixel scrubbing is not yet implemented, so this variable will currently only check for the header, and alert you of the image, and skip it. Regardless of the setting that you choose for the variable `ANONYMIZE_PIXELS` the header will always be checked. If you have pixel scrubbing turned on (and it's implemented) the images will be scrubbed, and included. If you have scrubbing turned on (and it's not implemented) it will just yell at you and skip them. The same thing will happen if it's off, just to alert you that they exist.
 
-```
-# The default study to use
-SOM_STUDY="test"
-```
-
-The `SOM_STUDY` is part of the Stanford DASHER API to specify a study, and the default should be set before you start the application. If the study needs to vary between calls, please [post an issue](https://www.github.com/pydicom/sendit) and it can be added to be done at runtime.
-
 Next, you likely want a custom filter applied to whitelist (accept no matter what), greylist (not accept, but in the future know how to clean the data) and blacklist (not accept). Currently, the deid software applies a [default filter](https://github.com/pydicom/deid/blob/development/deid/data/deid.dicom) to filter out images with known burned in pixels. If you want to add a custom file, currently it must live with the repository, and is referenced by the name of the file after the `deid`. You can specify this string in the config file:
 
 ```
````
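The `ANONYMIZE_PIXELS` paragraph kept as context above describes a check-and-skip flow. A minimal, hypothetical sketch of that flow in Python (the function and the exact header criterion are illustrative, not sendit's actual code; deid's default filter defines the real check):

```python
def triage_image(headers, anonymize_pixels=False):
    # Hypothetical helper, not sendit's actual code.
    # The header is always checked, whatever ANONYMIZE_PIXELS is set to.
    flagged = headers.get("BurnedInAnnotation") == "YES"  # illustrative criterion
    if not flagged:
        return "include"
    # Pixel scrubbing is not implemented yet, so a flagged image is
    # alerted on and skipped regardless of the setting.
    print("possible burned-in annotation, skipping image")
    return "skip"
```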
````diff
@@ -85,7 +78,7 @@ GOOGLE_STORAGE_COLLECTION=None # define here or in your secrets
 GOOGLE_PROJECT_NAME="project-name" # not the id, usually the end of the url in Google Cloud
 ```
 
-Note that the storage collection is set to None, and this should be the id of the study (eg, the IRB). For Google Storage, this collection corresponds with a Bucket. For BigQuery, it corresponds with a database (and a table of dicom). If this is set to None, it will not upload.
+Note that the storage collection is set to None, and this should be the id of the study (eg, the IRB). For Google Storage, this collection corresponds with a Bucket. For BigQuery, it corresponds with a database (and a table of dicom). If this is set to None, it will not upload. Also note that we derive the study name to use with Dasher from this bucket. It's simply the lowercase version of it. This means that a `GOOGLE_STORAGE_COLLECTION` of `IRB12345` maps to a study name `irb12345`.
 
 Note that this approach isn't suited for having more than one study - when that is the case, the study will likely be registered with the batch. Importantly, for the above, there must be a `GOOGLE_APPLICATION_CREDENTIALS` filepath exported in the environment, or it should be run on a Google Cloud Instance (unlikely in the near future).
````
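The added sentence states the whole mapping: the Dasher study name is just the lowercased bucket name. A minimal sketch, using the variable names from the config file below and the example value from the docs:

```python
# Derive the Dasher study name from the storage bucket name, as the
# updated docs describe.
GOOGLE_STORAGE_COLLECTION = "IRB12345"
SOM_STUDY = GOOGLE_STORAGE_COLLECTION.lower()
assert SOM_STUDY == "irb12345"
```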

sendit/apps/main/tasks/finish.py

Lines changed: 0 additions & 1 deletion
```diff
@@ -79,7 +79,6 @@ def upload_storage(batch_ids=None):
     from sendit.settings import (GOOGLE_CLOUD_STORAGE,
                                  SEND_TO_GOOGLE,
                                  GOOGLE_PROJECT_NAME,
-                                 GOOGLE_PROJECT_ID_HEADER,
                                  GOOGLE_STORAGE_COLLECTION)
 
     if batch_ids is None:
```

sendit/apps/main/tasks/get.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -244,7 +244,7 @@ def get_identifiers(bid,study=None,run_replace_identifiers=True):
 
     # Process all dicoms at once, one call to the API
     dicom_files = batch.get_image_paths()
-    batch.change_images_status('PROCESSING')
+    batch.status = "PROCESSING"
     batch.save() # redundant
 
     try:
```

sendit/settings/config.py

Lines changed: 4 additions & 5 deletions
```diff
@@ -15,9 +15,6 @@
 # If True, scrub pixel data for images identified by header "Burned in Annotation" = "NO"
 ANONYMIZE_PIXELS=False # currently not supported
 
-# The default study to use
-SOM_STUDY="test"
-
 # An additional specification for white, black, and greylisting data
 # If None, only the default (for burned pixel filtering) is used
 # Currently, these live with the deid software, eg:
@@ -53,5 +50,7 @@
 
 # Google Cloud Storage Bucket (must be created)
 GOOGLE_CLOUD_STORAGE='radiology'
-GOOGLE_STORAGE_COLLECTION=None # define here or in your secrets
-GOOGLE_PROJECT_NAME=None # define here or in your secretsy
+GOOGLE_STORAGE_COLLECTION='' # must be defined before SOM_STUDY
+GOOGLE_PROJECT_NAME=None
+
+SOM_STUDY = GOOGLE_STORAGE_COLLECTION.lower()
```
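One detail worth noting: `SOM_STUDY` is now computed at import time, so `GOOGLE_STORAGE_COLLECTION` must already be a string when this line runs, which is presumably why the default changed from `None` to `''`. A quick illustrative sketch of the failure mode this avoids:

```python
# ''.lower() is safe, so the settings module imports cleanly even
# before a real collection is configured.
SOM_STUDY = ''.lower()   # -> ''

# None.lower() would raise at import time instead:
try:
    None.lower()
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'lower'
```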
