Commit d35865b

committed
ACPYPE API Reference notebook
1 parent 51dd52c commit d35865b

1 file changed: +342 -0 lines changed

ACPype_API_Reference.ipynb

@@ -0,0 +1,342 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "private_outputs": true,
      "provenance": [],
      "collapsed_sections": [
        "42Dx-W2PhNvX",
        "sZmMu11IhXYR",
        "MuQBcdstMOQA",
        "aU_uwwgye4qJ"
      ],
      "authorship_tag": "ABX9TyPB+Hq+CQfyJnK2Zx9oFMHz",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Bio2Byte/public_notebooks/blob/main/ACPype_API_Reference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# ACPYPE API Reference\n",
        "\n",
        "[ACPYPE](https://bio2byte.be/acpype/) provides a user-friendly HTTP API that allows users to submit molecules in a variety of formats for processing with the ACPYPE software.\n",
        "\n",
        "The API lets users specify a range of processing options, including the charge method, net charge, and atom type.\n",
        "\n",
        "After submitting a molecule, users can query the result files generated by ACPYPE in the background using a simple hash ID.\n",
        "\n",
        "![https://bio2byte.be/static_tools/images/workflow_acpype.jpg](https://bio2byte.be/static_tools/images/workflow_acpype.jpg)\n",
        "\n",
        "## About ACPYPE\n",
        "\n",
        "The ACPYPE Portal was designed to generate topology parameter files for unusual organic chemical compounds. It is based on ANTECHAMBER and, so far, ACPYPE works with CNS/XPLOR, GROMACS, CHARMM and AMBER.\n",
        "\n",
        "The main scope of the ACPYPE Server is to help pave the way for automatic molecular dynamics simulations involving molecules with unknown parameters, for example in complexes of a protein and an inhibitor, where the ligand is usually an unusual chemical compound.\n",
        "\n",
        "ACPYPE stands for AnteChamber PYthon Parser interfacE and is pronounced \"ace + pipe\".\n",
        "\n",
        "### What the ACPYPE Server does\n",
        "It takes either a SMILES input or a PDB, MDL or MOL2 file of a small organic molecule without open valences, and assigns charges and force field parameters according to GAFF (Generalised Amber Force Field). Several options can be set when submitting a project.\n",
        "\n",
        "For the charge method, there are three options:\n",
        "\n",
        "- bcc: semi-empirical AM1-BCC, parameterized to reproduce HF/6-31G* RESP charges; slow, but with a good cost/benefit ratio (default)\n",
        "- gas: the Gasteiger method, very fast but less accurate\n",
        "- user: for a MOL2 file with charges already calculated, via, e.g., R.E.D.-III\n",
        "\n",
        "For the net charge, set an integer value for the project, or let ACPYPE guess the net charge of the molecule.\n",
        "\n",
        "For the atom type, set GAFF (default), GAFF2 or AMBER. If set to AMBER, ACPYPE/antechamber will try to assign parameters and atom types according to the AMBER14SB force field. If that fails, GAFF parameters (but with AMBER atom types) will be used.\n",
        "\n",
        "### What the ACPYPE Server doesn't do\n",
        "It will not work with an organic molecule that has open valences, contains atoms other than C, N, O, S, P, H, F, Cl, Br and I, or is covalently bonded to another molecule. If you want parameters for a modified amino acid residue, one way of obtaining them is to neutralise the N- and C-termini and then manually fit the additional parameters to the modified residue.\n",
        "\n",
        "### Citations\n",
        "If you use this resource, please cite:\n",
        "\n",
        "> SOUSA DA SILVA, A. W. & VRANKEN, W. F. ACPYPE - AnteChamber PYthon Parser interfacE. BMC Research Notes 2012, 5:367\n",
        ">\n",
        "> KAGAMI, L. P., SOUSA DA SILVA, A. W., DÍAZ, A., & VRANKEN, W. F. The ACPYPE web server for small molecule MD topology generation. Manuscript submitted.\n",
        "\n",
        "If you use non-uniform 1-4 scale factor conversion (e.g. if using GLYCAM06), please cite:\n",
        "\n",
        "> BERNARDI, A., FALLER, R., REITH, D., and KIRSCHNER, K. N. ACPYPE update for nonuniform 1-4 scale factors: Conversion of the GLYCAM06 force field from AMBER to GROMACS. SoftwareX 10 (2019), 100241.\n",
        "\n",
        "### Disclaimer\n",
        "This service is provided with ABSOLUTELY NO WARRANTY and holds no liabilities. If you decide to use it, bear in mind that your data and potential results are not stored with encryption, and the service administrators have access to them. Furthermore, the data and results will eventually be removed two weeks after the time of submission, whether your job has finished or not."
      ],
      "metadata": {
        "id": "Sd18WMA-LMw6"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Submitting a ligand\n",
        "Depending on your input, please execute one of the two available scenarios."
      ],
      "metadata": {
        "id": "URKKUNUuLWlq"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "A4XSVcRrLMHv"
      },
      "outputs": [],
      "source": [
        "# Import the necessary libraries\n",
        "import json\n",
        "import requests"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Placeholders, filled in by the submission cells below\n",
        "location = None\n",
        "hash_id = None"
      ],
      "metadata": {
        "id": "8ZCpOe2ndTDf"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Scenario A: Using a SMILES representation"
      ],
      "metadata": {
        "id": "42Dx-W2PhNvX"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "url = \"https://bio2byte.be/acs/api\"\n",
        "print(\"POST\", url)\n",
        "\n",
        "# Define the request payload (here: water, SMILES \"O\")\n",
        "payload = {\n",
        "    \"inputFile\": None,\n",
        "    \"file_name\": \"OXYGEN_FROM_API\",\n",
        "    \"token\": \"RTV5GZHc5M\",\n",
        "    \"charge_method\": \"bcc\",\n",
        "    \"net_charge\": None,\n",
        "    \"atom_type\": \"gaff\",\n",
        "    \"email\": None,\n",
        "    \"smiles\": \"O\"\n",
        "}\n",
        "\n",
        "# Send the POST request\n",
        "response = requests.post(url, json=payload)\n",
        "\n",
        "# Process the response (any 2xx status code means the job was accepted)\n",
        "if 200 <= response.status_code < 300:\n",
        "    data = response.json()\n",
        "    location = data[\"Location\"]\n",
        "    hash_id = data[\"hash_id\"]\n",
        "    message = data[\"message\"]\n",
        "    print(\"Prediction job submitted successfully!\")\n",
        "    print(f\"Location: {location}\")\n",
        "    print(f\"hash_id: {hash_id}\")\n",
        "    print(f\"Message: {message}\")\n",
        "else:\n",
        "    print(\"Failed to submit prediction job. Status Code:\", response.status_code)"
      ],
      "metadata": {
        "id": "DXTpdk-qLVah"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Scenario B: Using a file with coordinates\n",
        "Please upload your input file (a PDB, MDL or MOL2 file of a small organic molecule) to Google Colab first."
      ],
      "metadata": {
        "id": "sZmMu11IhXYR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "url = \"https://bio2byte.be/acs/api\"\n",
        "print(\"POST\", url)\n",
        "\n",
        "# Read the uploaded structure file\n",
        "with open(\"/content/EXAMPLE.pdb\", \"rb\") as input_file_handler:\n",
        "    file_content = input_file_handler.read().decode(\"utf-8\")\n",
        "\n",
        "# Define the request payload\n",
        "payload = {\n",
        "    \"inputFile\": file_content,\n",
        "    \"file_name\": \"EXAMPLE.pdb\",\n",
        "    \"token\": \"RTV5GZHc5M\",\n",
        "    \"charge_method\": \"bcc\",\n",
        "    \"net_charge\": None,\n",
        "    \"atom_type\": \"gaff\",\n",
        "    \"email\": None,\n",
        "    \"smiles\": None\n",
        "}\n",
        "\n",
        "# Send the POST request\n",
        "response = requests.post(url, json=payload)\n",
        "\n",
        "# Process the response (any 2xx status code means the job was accepted)\n",
        "if 200 <= response.status_code < 300:\n",
        "    data = response.json()\n",
        "    location = data[\"Location\"]\n",
        "    hash_id = data[\"hash_id\"]\n",
        "    message = data[\"message\"]\n",
        "    print(\"Prediction job submitted successfully!\")\n",
        "    print(f\"Location: {location}\")\n",
        "    print(f\"hash_id: {hash_id}\")\n",
        "    print(f\"Message: {message}\")\n",
        "else:\n",
        "    print(\"Failed to submit prediction job. Status Code:\", response.status_code)"
      ],
      "metadata": {
        "id": "IOB_sR1phXHR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Querying the submission status via Hash ID"
      ],
      "metadata": {
        "id": "MuQBcdstMOQA"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "url = f\"https://bio2byte.be/acs{location}\"\n",
        "print(\"GET\", url)\n",
        "\n",
        "# Make the API request and store the JSON response in a variable\n",
        "response = requests.get(url)\n",
        "\n",
        "# Check the response status code\n",
        "if response.status_code >= 400:\n",
        "    print(\"Failed to fetch prediction job. Response status code:\", response.status_code)\n",
        "elif response.status_code >= 202:\n",
        "    # 202 Accepted: the job is still running\n",
        "    print(\"Still processing your request. Please try again in a minute. Response status code:\", response.status_code)\n",
        "\n",
        "    # Extract the JSON response\n",
        "    json_response = response.json()\n",
        "    print(json_response)\n",
        "else:\n",
        "    print(\"Response status code:\", response.status_code)\n",
        "    # Extract the JSON response\n",
        "    json_response = response.json()\n",
        "\n",
        "    job_id = json_response['id']\n",
        "    creation_date = json_response['creation_date']\n",
        "    token = json_response['token']\n",
        "    hash_id = json_response['hash_id']\n",
        "    result_request = json_response['result_request']\n",
        "    location = result_request['location']\n",
        "    status = result_request['status']\n",
        "\n",
        "    print(\"Response fields:\")\n",
        "    print(f\"id: {job_id}\")\n",
        "    print(f\"creation_date: {creation_date}\")\n",
        "    print(f\"token: {token}\")\n",
        "    print(f\"hash_id: {hash_id}\")\n",
        "    print(f\"location: {location}\")\n",
        "    print(f\"status: {status}\")\n",
        "\n",
        "    print(\"Your job files are available!\")"
      ],
      "metadata": {
        "id": "B1vKIdBeMNw0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Fetching the submission results via Hash ID"
      ],
      "metadata": {
        "id": "aU_uwwgye4qJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Make the API request and store the JSON response in a variable\n",
        "url = f\"https://bio2byte.be/acs/api/{hash_id}/\"\n",
        "print(\"GET\", url)\n",
        "\n",
        "response = requests.get(url)\n",
        "\n",
        "# Check the response status code\n",
        "if response.status_code >= 400:\n",
        "    # Display an error message if the request fails\n",
        "    print(\"Failed to fetch prediction job. Status Code:\", response.status_code)\n",
        "elif response.status_code >= 300:\n",
        "    print(\"Still working on the prediction job, please try again in a minute. Status Code:\", response.status_code)\n",
        "else:\n",
        "    # Extract the JSON response\n",
        "    json_response = response.json()\n",
        "\n",
        "    # Loop through each result and save it to a separate file\n",
        "    for i, result in enumerate(json_response[\"results\"], start=1):\n",
        "        for key, value in result.items():\n",
        "            # Save each result to its own file\n",
        "            print(f\"Saving {key} content to result_{i}_{key}.txt\")\n",
        "\n",
        "            with open(f\"result_{i}_{key}.txt\", \"w\") as f:\n",
        "                f.write(value)\n",
        "\n",
        "    print(\"Files saved successfully\")"
      ],
      "metadata": {
        "id": "Q9N7GnJIbhaO"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Compress the result files\n",
        "!zip -r acpype_results.zip /content/*.txt\n",
        "\n",
        "# Download the results\n",
        "from google.colab import files\n",
        "files.download(\"/content/acpype_results.zip\")"
      ],
      "metadata": {
        "id": "kSmDEJkPfyRR"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}
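
Taken together, the notebook's cells amount to a submit-then-poll client: POST the payload, read back `hash_id`, then GET `/acs/api/{hash_id}/` until the results appear. Below is a minimal sketch condensing those steps into two functions. The endpoint, payload fields, and response fields are taken from the notebook; the helper names `build_payload` and `submit_and_wait`, the `API_ROOT` constant, and the fixed polling interval are my own additions, and the token is the example value used in the notebook.

```python
import time

API_ROOT = "https://bio2byte.be/acs"


def build_payload(smiles, file_name="LIGAND_FROM_API", token="RTV5GZHc5M",
                  charge_method="bcc", atom_type="gaff"):
    """Assemble the SMILES submission payload shown in the notebook's POST cell."""
    return {
        "inputFile": None,        # no coordinate file: we submit a SMILES string
        "file_name": file_name,
        "token": token,
        "charge_method": charge_method,  # "bcc" (default), "gas", or "user"
        "net_charge": None,              # None lets ACPYPE guess the net charge
        "atom_type": atom_type,          # "gaff" (default), "gaff2", or "amber"
        "email": None,
        "smiles": smiles,
    }


def submit_and_wait(smiles, poll_seconds=60):
    """Submit a SMILES string and poll until the result files are ready."""
    import requests  # third-party; already imported at the top of the notebook

    response = requests.post(f"{API_ROOT}/api", json=build_payload(smiles))
    response.raise_for_status()
    hash_id = response.json()["hash_id"]

    while True:
        result = requests.get(f"{API_ROOT}/api/{hash_id}/")
        if result.status_code == 200:
            return result.json()["results"]
        result.raise_for_status()  # abort on 4xx/5xx errors
        time.sleep(poll_seconds)   # 202 and similar: job still processing

# Example (performs network calls against the live service):
#   results = submit_and_wait("O")
```

Each entry in the returned `results` list maps file names to contents, so the notebook's save loop (`for i, result in enumerate(results, start=1): ...`) can be reused unchanged on the return value.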
