Merge lp:~patrick-hetu/charms/precise/python-django/pure-python into lp:charms/python-django
Status: Merged
Merged at revision: 34
Proposed branch: lp:~patrick-hetu/charms/precise/python-django/pure-python
Merge into: lp:charms/python-django
Diff against target: 4208 lines (+2530/-726), 51 files modified:
- .bzrignore (+2/-0)
- Makefile (+30/-0)
- bin/charm_helpers_sync.py (+225/-0)
- config.yaml (+7/-13)
- dev/ubuntu-deps (+17/-0)
- fabfile.py (+4/-1)
- hooks/charmhelpers/contrib/charmhelpers/IMPORT (+0/-4)
- hooks/charmhelpers/contrib/charmsupport/IMPORT (+0/-14)
- hooks/charmhelpers/contrib/hahelpers/apache.py (+9/-8)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+4/-4)
- hooks/charmhelpers/contrib/jujugui/IMPORT (+0/-4)
- hooks/charmhelpers/contrib/network/ip.py (+69/-0)
- hooks/charmhelpers/contrib/openstack/context.py (+215/-56)
- hooks/charmhelpers/contrib/openstack/neutron.py (+42/-8)
- hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+0/-11)
- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+0/-37)
- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+0/-23)
- hooks/charmhelpers/contrib/openstack/utils.py (+34/-20)
- hooks/charmhelpers/contrib/peerstorage/__init__.py (+83/-0)
- hooks/charmhelpers/contrib/python/packages.py (+76/-0)
- hooks/charmhelpers/contrib/python/version.py (+18/-0)
- hooks/charmhelpers/contrib/ssl/service.py (+267/-0)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+6/-2)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+12/-2)
- hooks/charmhelpers/contrib/templating/contexts.py (+46/-15)
- hooks/charmhelpers/contrib/unison/__init__.py (+257/-0)
- hooks/charmhelpers/core/hookenv.py (+6/-0)
- hooks/charmhelpers/core/host.py (+19/-3)
- hooks/charmhelpers/fetch/__init__.py (+40/-3)
- hooks/charmhelpers/fetch/archiveurl.py (+15/-0)
- hooks/hooks.py (+80/-39)
- hooks/tests/test_hooks.py (+181/-0)
- hooks/tests/test_template.py (+125/-0)
- hooks/tests/test_utils.py (+100/-0)
- metadata.yaml (+4/-3)
- playbooks/django_manage.yaml (+0/-5)
- playbooks/install.yaml (+0/-45)
- revision (+1/-1)
- tests/00-setup (+0/-13)
- tests/01-dj13 (+47/-0)
- tests/01-dj14 (+47/-0)
- tests/01-djdistro (+47/-0)
- tests/01_deploy.test (+0/-51)
- tests/10-bundle-test.py (+0/-33)
- tests/10-mysql (+58/-0)
- tests/10-postgresql (+58/-0)
- tests/bundles.yaml (+0/-30)
- tests/config/django.yaml (+98/-0)
- tests/helpers.py (+0/-278)
- tests/helpers/__init__.py (+136/-0)
- tests/jujulib/deployer.py (+45/-0)

To merge this branch: bzr merge lp:~patrick-hetu/charms/precise/python-django/pure-python
Related bugs: (none listed)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Marco Ceppi (community) | | | Approve |
| Cory Johns (community) | | | Approve |
| Review Queue (community) | automated testing | | Needs Fixing |
| Charles Butler (community) | | | Needs Fixing |
| Tim Van Steenburgh (community) | | | Needs Fixing |
| Whit Morriss (community) | | | Needs Fixing |
Commit message
Description of the change
This makes the charm usable (on precise and trusty) until the Ansible version is ready.

Charles Butler (lazypower) wrote:

Whit Morriss (whitmo) wrote:
Howdy Patrick! Thanks for your work on this charm.
Small concerns:
- no hooks for lander-jenkins relationships
- make tests only runs the integration tests (but not the hook unit tests)
(would also be nice to have a requirements file for test dependencies)
Speaking with Chuck, there is some concern about switching back and forth between approaches.
-w

Tim Van Steenburgh (tvansteenburgh) wrote:
Hi Patrick,
Thanks for working on this charm! Before proceeding with this review we'll need to get the tests passing. Here's what I found:
* `charm-proof` fails:
W: config.yaml: option django_
W: config.yaml: option django_
W: config.yaml: option unit-config does not have the keys: default
* `make lint` and `make test` fail. It's important to make these pass as they will be executed by our automated charm testing system.
* functional test failures:
  - 01-dj13: Deployment timed out
  - 10-mysql: Deployment timed out
  - 10-postgresql: Install hook failure
The other functional tests passed for me. My testing was on lxc - if these tests pass for you on a different provider, please let me know.
Thanks again for your effort on this charm. Looking forward to continuing this review once the tests are fixed up!

Jorge Castro (jorge) wrote:
Setting this status to "Work in progress". Patrick, when you're ready for another round just change the MP status to "Needs Review", thanks!

Patrick Hetu (patrick-hetu) wrote:
In revision 67 I fixed the tests and made the charm pass `charm proof`.

Charles Butler (lazypower) wrote:
Sorry about the premature category change, I'm taking a look at this now since it was glossed over before in my review duties. Thanks Patrick!

Charles Butler (lazypower) wrote:
Greetings Patrick,
I've taken some time to look over your proposed changes and I have the following feedback:
We use the utility `bundletester` (which is pip installable) to test charms; we've also leveraged it in our CI infrastructure. It specifically sniffs out make targets and executes them when it finds matching targets.
The proposed charm fails bundletester on the LINT and TEST targets, as previously mentioned by tvansteenburgh. Should you require additional help getting these targets passing, ping me in #juju on irc.freenode.net (I am lazypower there as well).
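The target sniffing lazypower describes can be pictured roughly like this. This is a simplified sketch of the idea only, not bundletester's actual implementation: a runner scans the Makefile for well-known entry points such as `lint:` and `test:` and invokes only the ones it recognizes.

```python
import re


def find_make_targets(makefile_text, wanted=("lint", "test")):
    """Return the well-known targets a Makefile defines.

    A crude approximation of how a test runner can discover
    `make lint` / `make test` style entry points before running them.
    """
    targets = set()
    for line in makefile_text.splitlines():
        # A target line looks like "name: prerequisites...".
        # Recipe lines start with a tab, so they never match.
        m = re.match(r"^([A-Za-z0-9_.-]+)\s*:", line)
        if m and m.group(1) in wanted:
            targets.add(m.group(1))
    return sorted(targets)


makefile = """\
lint:
\tflake8 hooks
\tcharm-proof .

integration-test:
\tjuju test -v
"""
print(find_make_targets(makefile))  # only 'lint' matches here
```

This also shows why the charm above was failing in CI even though `juju test` worked locally: a missing or broken `lint`/`test` target is simply never exercised by hand, but an automated runner that discovers targets will execute it.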
I ran the deployment and things went smoothly. I was able to access the django admin panel as validation.
Once you have the other 2 issues shored up I see no huge blockers on this merge. Thank you for the submission and your patience during the review process.
If you have any questions/
- 68. By Patrick Hetu: fix test with bundletester

Patrick Hetu (patrick-hetu) wrote:
Fixed in 68:
For me the lint test passes, both with bundletester and my Makefile.
For the integration tests (01-dj13, 01-dj14, 10-mysql and 10-postgresql):
I'm not really sure why, but bundletester creates a virtualenv in the charm directory, and my test script then fails to copy that .venv directory to the temporary directory where it runs the tests.
So now I'm excluding that .venv, and all the tests pass.
I've also commented out the test directive in the Makefile, since bundletester runs it and then reruns each test individually.
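The .venv exclusion described above is the usual fix when staging a charm tree that a tool has polluted with a virtualenv. In Python this kind of filtered copy can be done with `shutil.copytree`'s ignore hook; the following is a generic sketch of the technique, not the charm's actual test helper:

```python
import os
import shutil
import tempfile


def copy_charm(src, dest):
    # Skip virtualenvs and bzr metadata when staging the charm
    # into the temporary directory the tests run from.
    shutil.copytree(src, dest,
                    ignore=shutil.ignore_patterns('.venv', '.bzr'))


# Tiny demonstration with a throwaway tree.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, '.venv', 'bin'))
open(os.path.join(src, 'metadata.yaml'), 'w').close()

dest = os.path.join(tempfile.mkdtemp(), 'charm')
copy_charm(src, dest)
print(sorted(os.listdir(dest)))  # '.venv' is not copied
```

`ignore_patterns` matches directory names anywhere in the tree, so a stray `.venv` created mid-run is skipped no matter where the tool put it.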

Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here: http://

Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here: http://
- 69. By Patrick Hetu: fix small lint error

Patrick Hetu (patrick-hetu) wrote:
I've fixed the lint error, but it's really not clear to me why the other tests are failing.

Cory Johns (johnsca) wrote:
Patrick,
Thanks for your work on this charm!
It seems that the tests are failing on AWS and HPCloud due to the django.yaml bundle not including "expose: true" directives; the services get started up but can't be connected to because they are not exposed. They did all pass for me on local provider, where expose isn't necessary.
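For reference, in a deployer bundle the fix amounts to adding an `expose` flag to each service that the tests connect to from outside the environment. A hypothetical fragment follows; the service names and layout are illustrative only, not the actual contents of tests/config/django.yaml:

```yaml
django-bundle:
  series: precise
  services:
    python-django:
      charm: python-django
    gunicorn:
      charm: gunicorn
      expose: true   # without this, AWS/HPCloud block inbound traffic
    postgresql:
      charm: postgresql
  relations:
    - [python-django, postgresql]
    - [python-django, gunicorn]
```

On the local provider there is no firewall between the test runner and the units, which is why the missing `expose` only surfaced on the public clouds.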
I also wanted to note that the tests indicate that the `juju ssh` command ought to be updated once https:/
- 70. By Patrick Hetu: exposing gunicorn's ports in the tests

Patrick Hetu (patrick-hetu) wrote:
> It seems that the tests are failing on AWS and HPCloud due to the django.yaml
> bundle not including "expose: true" directives; the services get started up
> but can't be connected to because they are not exposed. They did all pass for
> me on local provider, where expose isn't necessary.
Ah! I did not know about this one. I've fixed it in the latest commit of this branch.
> I also wanted to note that the tests indicate that the `juju ssh` command
> ought to be updated once https:/
> resolved, but it seems that was fixed several releases ago. I'm assuming this
> is carry-over from older tests, and it doesn't affect their function. Just
> wanted to mention it for future test changes.
ok
- 71. By Patrick Hetu: merge with trunk

Patrick Hetu (patrick-hetu) wrote:
Alright, really ready for a review now.

Cory Johns (johnsca) wrote:
Tests now pass on AWS and it LGTM. +1
Thanks again, Patrick!

Charles Butler (lazypower) wrote:
Added a comment inline
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2013-04-11 16:47:16 +0000 |
3 | +++ .bzrignore 2014-11-19 17:44:21 +0000 |
4 | @@ -3,3 +3,5 @@ |
5 | *.py[co] |
6 | *.sql |
7 | *.dump |
8 | +./.venv |
9 | +./result |
10 | |
11 | === added file 'Makefile' |
12 | --- Makefile 1970-01-01 00:00:00 +0000 |
13 | +++ Makefile 2014-11-19 17:44:21 +0000 |
14 | @@ -0,0 +1,30 @@ |
15 | +#!/usr/bin/make |
16 | +PYTHON := /usr/bin/env python |
17 | + |
18 | +#test: lint integration-test |
19 | + |
20 | +sync-charm-helpers: bin/charm_helpers_sync.py |
21 | + @mkdir -p bin |
22 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
23 | + |
24 | +bin/charm_helpers_sync.py: |
25 | + @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py > bin/charm_helpers_sync.py |
26 | + |
27 | +lint: |
28 | + @echo "Lint check (flake8)" |
29 | + @flake8 -v --ignore E501 --exclude hooks/charmhelpers hooks |
30 | + @charm-proof . |
31 | + |
32 | +verify-juju-test: |
33 | + @echo "Checking for ... " |
34 | + @echo -n "juju-test: " |
35 | + @if [ -z `which juju-test` ]; then \ |
36 | + echo -e "\nRun ./dev/ubuntu-deps to get the juju-test command installed"; \ |
37 | + exit 1;\ |
38 | + else \ |
39 | + echo "installed"; \ |
40 | + fi |
41 | + |
42 | +integration-test: |
43 | + juju test --set-e -p SKIP_SLOW_TESTS,DEPLOYER_TARGET,JUJU_HOME,JUJU_ENV -v --timeout 3000s |
44 | + |
45 | |
46 | === added directory 'bin' |
47 | === added file 'bin/charm_helpers_sync.py' |
48 | --- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000 |
49 | +++ bin/charm_helpers_sync.py 2014-11-19 17:44:21 +0000 |
50 | @@ -0,0 +1,225 @@ |
51 | +#!/usr/bin/python |
52 | +# |
53 | +# Copyright 2013 Canonical Ltd. |
54 | + |
55 | +# Authors: |
56 | +# Adam Gandelman <adamg@ubuntu.com> |
57 | +# |
58 | + |
59 | +import logging |
60 | +import optparse |
61 | +import os |
62 | +import subprocess |
63 | +import shutil |
64 | +import sys |
65 | +import tempfile |
66 | +import yaml |
67 | + |
68 | +from fnmatch import fnmatch |
69 | + |
70 | +CHARM_HELPERS_BRANCH = 'lp:charm-helpers' |
71 | + |
72 | + |
73 | +def parse_config(conf_file): |
74 | + if not os.path.isfile(conf_file): |
75 | + logging.error('Invalid config file: %s.' % conf_file) |
76 | + return False |
77 | + return yaml.load(open(conf_file).read()) |
78 | + |
79 | + |
80 | +def clone_helpers(work_dir, branch): |
81 | + dest = os.path.join(work_dir, 'charm-helpers') |
82 | + logging.info('Checking out %s to %s.' % (branch, dest)) |
83 | + cmd = ['bzr', 'branch', branch, dest] |
84 | + subprocess.check_call(cmd) |
85 | + return dest |
86 | + |
87 | + |
88 | +def _module_path(module): |
89 | + return os.path.join(*module.split('.')) |
90 | + |
91 | + |
92 | +def _src_path(src, module): |
93 | + return os.path.join(src, 'charmhelpers', _module_path(module)) |
94 | + |
95 | + |
96 | +def _dest_path(dest, module): |
97 | + return os.path.join(dest, _module_path(module)) |
98 | + |
99 | + |
100 | +def _is_pyfile(path): |
101 | + return os.path.isfile(path + '.py') |
102 | + |
103 | + |
104 | +def ensure_init(path): |
105 | + ''' |
106 | + ensure directories leading up to path are importable, omitting |
107 | + parent directory, eg path='/hooks/helpers/foo'/: |
108 | + hooks/ |
109 | + hooks/helpers/__init__.py |
110 | + hooks/helpers/foo/__init__.py |
111 | + ''' |
112 | + for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])): |
113 | + _i = os.path.join(d, '__init__.py') |
114 | + if not os.path.exists(_i): |
115 | + logging.info('Adding missing __init__.py: %s' % _i) |
116 | + open(_i, 'wb').close() |
117 | + |
118 | + |
119 | +def sync_pyfile(src, dest): |
120 | + src = src + '.py' |
121 | + src_dir = os.path.dirname(src) |
122 | + logging.info('Syncing pyfile: %s -> %s.' % (src, dest)) |
123 | + if not os.path.exists(dest): |
124 | + os.makedirs(dest) |
125 | + shutil.copy(src, dest) |
126 | + if os.path.isfile(os.path.join(src_dir, '__init__.py')): |
127 | + shutil.copy(os.path.join(src_dir, '__init__.py'), |
128 | + dest) |
129 | + ensure_init(dest) |
130 | + |
131 | + |
132 | +def get_filter(opts=None): |
133 | + opts = opts or [] |
134 | + if 'inc=*' in opts: |
135 | + # do not filter any files, include everything |
136 | + return None |
137 | + |
138 | + def _filter(dir, ls): |
139 | + incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt] |
140 | + _filter = [] |
141 | + for f in ls: |
142 | + _f = os.path.join(dir, f) |
143 | + |
144 | + if not os.path.isdir(_f) and not _f.endswith('.py') and incs: |
145 | + if True not in [fnmatch(_f, inc) for inc in incs]: |
146 | + logging.debug('Not syncing %s, does not match include ' |
147 | + 'filters (%s)' % (_f, incs)) |
148 | + _filter.append(f) |
149 | + else: |
150 | + logging.debug('Including file, which matches include ' |
151 | + 'filters (%s): %s' % (incs, _f)) |
152 | + elif (os.path.isfile(_f) and not _f.endswith('.py')): |
153 | + logging.debug('Not syncing file: %s' % f) |
154 | + _filter.append(f) |
155 | + elif (os.path.isdir(_f) and not |
156 | + os.path.isfile(os.path.join(_f, '__init__.py'))): |
157 | + logging.debug('Not syncing directory: %s' % f) |
158 | + _filter.append(f) |
159 | + return _filter |
160 | + return _filter |
161 | + |
162 | + |
163 | +def sync_directory(src, dest, opts=None): |
164 | + if os.path.exists(dest): |
165 | + logging.debug('Removing existing directory: %s' % dest) |
166 | + shutil.rmtree(dest) |
167 | + logging.info('Syncing directory: %s -> %s.' % (src, dest)) |
168 | + |
169 | + shutil.copytree(src, dest, ignore=get_filter(opts)) |
170 | + ensure_init(dest) |
171 | + |
172 | + |
173 | +def sync(src, dest, module, opts=None): |
174 | + if os.path.isdir(_src_path(src, module)): |
175 | + sync_directory(_src_path(src, module), _dest_path(dest, module), opts) |
176 | + elif _is_pyfile(_src_path(src, module)): |
177 | + sync_pyfile(_src_path(src, module), |
178 | + os.path.dirname(_dest_path(dest, module))) |
179 | + else: |
180 | + logging.warn('Could not sync: %s. Neither a pyfile or directory, ' |
181 | + 'does it even exist?' % module) |
182 | + |
183 | + |
184 | +def parse_sync_options(options): |
185 | + if not options: |
186 | + return [] |
187 | + return options.split(',') |
188 | + |
189 | + |
190 | +def extract_options(inc, global_options=None): |
191 | + global_options = global_options or [] |
192 | + if global_options and isinstance(global_options, basestring): |
193 | + global_options = [global_options] |
194 | + if '|' not in inc: |
195 | + return (inc, global_options) |
196 | + inc, opts = inc.split('|') |
197 | + return (inc, parse_sync_options(opts) + global_options) |
198 | + |
199 | + |
200 | +def sync_helpers(include, src, dest, options=None): |
201 | + if not os.path.isdir(dest): |
202 | + os.mkdir(dest) |
203 | + |
204 | + global_options = parse_sync_options(options) |
205 | + |
206 | + for inc in include: |
207 | + if isinstance(inc, str): |
208 | + inc, opts = extract_options(inc, global_options) |
209 | + sync(src, dest, inc, opts) |
210 | + elif isinstance(inc, dict): |
211 | + # could also do nested dicts here. |
212 | + for k, v in inc.iteritems(): |
213 | + if isinstance(v, list): |
214 | + for m in v: |
215 | + inc, opts = extract_options(m, global_options) |
216 | + sync(src, dest, '%s.%s' % (k, inc), opts) |
217 | + |
218 | +if __name__ == '__main__': |
219 | + parser = optparse.OptionParser() |
220 | + parser.add_option('-c', '--config', action='store', dest='config', |
221 | + default=None, help='helper config file') |
222 | + parser.add_option('-D', '--debug', action='store_true', dest='debug', |
223 | + default=False, help='debug') |
224 | + parser.add_option('-b', '--branch', action='store', dest='branch', |
225 | + help='charm-helpers bzr branch (overrides config)') |
226 | + parser.add_option('-d', '--destination', action='store', dest='dest_dir', |
227 | + help='sync destination dir (overrides config)') |
228 | + (opts, args) = parser.parse_args() |
229 | + |
230 | + if opts.debug: |
231 | + logging.basicConfig(level=logging.DEBUG) |
232 | + else: |
233 | + logging.basicConfig(level=logging.INFO) |
234 | + |
235 | + if opts.config: |
236 | + logging.info('Loading charm helper config from %s.' % opts.config) |
237 | + config = parse_config(opts.config) |
238 | + if not config: |
239 | + logging.error('Could not parse config from %s.' % opts.config) |
240 | + sys.exit(1) |
241 | + else: |
242 | + config = {} |
243 | + |
244 | + if 'branch' not in config: |
245 | + config['branch'] = CHARM_HELPERS_BRANCH |
246 | + if opts.branch: |
247 | + config['branch'] = opts.branch |
248 | + if opts.dest_dir: |
249 | + config['destination'] = opts.dest_dir |
250 | + |
251 | + if 'destination' not in config: |
252 | + logging.error('No destination dir. specified as option or config.') |
253 | + sys.exit(1) |
254 | + |
255 | + if 'include' not in config: |
256 | + if not args: |
257 | + logging.error('No modules to sync specified as option or config.') |
258 | + sys.exit(1) |
259 | + config['include'] = [] |
260 | + [config['include'].append(a) for a in args] |
261 | + |
262 | + sync_options = None |
263 | + if 'options' in config: |
264 | + sync_options = config['options'] |
265 | + tmpd = tempfile.mkdtemp() |
266 | + try: |
267 | + checkout = clone_helpers(tmpd, config['branch']) |
268 | + sync_helpers(config['include'], checkout, config['destination'], |
269 | + options=sync_options) |
270 | + except Exception, e: |
271 | + logging.error("Could not sync: %s" % e) |
272 | + raise e |
273 | + finally: |
274 | + logging.debug('Cleaning up %s' % tmpd) |
275 | + shutil.rmtree(tmpd) |
276 | |
277 | === modified file 'config.yaml' |
278 | --- config.yaml 2014-10-31 14:29:19 +0000 |
279 | +++ config.yaml 2014-11-19 17:44:21 +0000 |
280 | @@ -74,14 +74,6 @@ |
281 | description: | |
282 | The relative path to install_root where the manage.py |
283 | script is located. |
284 | - unit-config: |
285 | - type: string |
286 | - default: |
287 | - description: | |
288 | - base64 encoded string to hold configuration information for the unit. |
289 | - The contents will be written to a file named |
290 | - <install_root>/<unit>/unit_config |
291 | - where <unit> is the location the branch is extracted to. |
292 | additional_distro_packages: |
293 | type: string |
294 | default: "python-imaging,python-docutils,python-tz" |
295 | @@ -137,17 +129,19 @@ |
296 | Enable disable settings.DEBUG for django |
297 | django_allowed_hosts: |
298 | type: string |
299 | + default: "" |
300 | description: | |
301 | A space separated list for settings.ALLOWED_HOSTS in django. Default |
302 | value will be the hostname, fully-qualified name, and public IP. |
303 | - default: |
304 | + default: "" |
305 | django_extra_settings: |
306 | type: string |
307 | + default: "" |
308 | description: | |
309 | Allows setting up extra settings.* values for Django. Acceptable |
310 | values are limited to comma delimited key-value pairs like: |
311 | SETTING_X=foo, SETTING_Y=bar |
312 | - default: |
313 | + default: "" |
314 | urls_dir_name: |
315 | type: string |
316 | default: "juju_urls" |
317 | @@ -174,19 +168,19 @@ |
318 | This is relative to the settings_dir_name path define earlier. |
319 | settings_database_name: |
320 | type: string |
321 | - default: "20-engine-%(engine_name)s.py" |
322 | + default: "60-%(engine_name)s.py" |
323 | description: | |
324 | The place where the database configuration will be appended or written. |
325 | Set the variable to an empty string if you don't want the feature. |
326 | settings_secret_key_name: |
327 | type: string |
328 | - default: "10-secret.py" |
329 | + default: "60-secret.py" |
330 | description: | |
331 | The place where the secret key configuration will be appended or written. |
332 | Set the variable to an empty string if you don't want the feature. |
333 | settings_amqp_name: |
334 | type: string |
335 | - default: "20-amqp.py" |
336 | + default: "60-amqp.py" |
337 | description: | |
338 | The place where the amqp configuration will be appended or written. |
339 | celery_always_eager: |
340 | |
341 | === added directory 'dev' |
342 | === added file 'dev/ubuntu-deps' |
343 | --- dev/ubuntu-deps 1970-01-01 00:00:00 +0000 |
344 | +++ dev/ubuntu-deps 2014-11-19 17:44:21 +0000 |
345 | @@ -0,0 +1,17 @@ |
346 | +#!/bin/bash -e |
347 | +# Needs to be run as a user who can sudo. d'oh! |
348 | +# It will ask your password a lot. |
349 | + |
350 | +# Install add-apt-repository (packaging differs across releases). |
351 | +lsb_release -r | grep -q 12.04 \ |
352 | + && sudo apt-get -y install python-software-properties \ |
353 | + || sudo apt-get -y install software-properties-common |
354 | + |
355 | +# Add the juju stable ppa, install charm-tools (juju-test plugin) and other deps |
356 | +sudo add-apt-repository -y ppa:juju/stable |
357 | +sudo apt-get update |
358 | +sudo apt-get -y install juju-deployer juju-core charm-tools python3 python3-yaml python-flake8 |
359 | + |
360 | +# python3-flake8 was introduced after 12.04. Releases prior to that are not |
361 | +# supported. |
362 | +lsb_release -r | grep -q 12.04 || sudo apt-get -y install python3-flake8 |
363 | |
364 | === modified file 'fabfile.py' |
365 | --- fabfile.py 2014-04-24 19:06:46 +0000 |
366 | +++ fabfile.py 2014-11-19 17:44:21 +0000 |
367 | @@ -70,6 +70,9 @@ |
368 | env.conf = _config_get(env.service_name) |
369 | if not env.conf['django_settings']: |
370 | django_settings_modules = '.'.join([env.sanitized_service_name, 'settings']) |
371 | + if env.conf['application_path']: |
372 | + django_settings_modules = '.'.join([os.path.basename(env.conf['application_path']), |
373 | + 'settings']) |
374 | else: |
375 | django_settings_modules = env.conf['django_settings'] |
376 | |
377 | @@ -189,7 +192,7 @@ |
378 | django_admin_cmd = _find_django_admin_cmd() |
379 | |
380 | with cd(env.site_path): |
381 | - with shell_env(PYTHONPATH=':'.join([env.project_dir, env.python_path])): |
382 | + with shell_env(PYTHONPATH=':'.join([os.path.join(env.site_path, '../'), env.python_path])): |
383 | sudo('%s %s --settings=%s' % |
384 | (django_admin_cmd, command, django_settings_modules), |
385 | user=env.conf['wsgi_user']) |
386 | |
387 | === removed file 'hooks/charmhelpers/contrib/charmhelpers/IMPORT' |
388 | --- hooks/charmhelpers/contrib/charmhelpers/IMPORT 2013-11-26 17:12:54 +0000 |
389 | +++ hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000 |
390 | @@ -1,4 +0,0 @@ |
391 | -Source lp:charm-tools/trunk |
392 | - |
393 | -charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py |
394 | -charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py |
395 | |
396 | === removed file 'hooks/charmhelpers/contrib/charmsupport/IMPORT' |
397 | --- hooks/charmhelpers/contrib/charmsupport/IMPORT 2013-11-26 17:12:54 +0000 |
398 | +++ hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000 |
399 | @@ -1,14 +0,0 @@ |
400 | -Source: lp:charmsupport/trunk |
401 | - |
402 | -charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py |
403 | -charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py |
404 | -charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py |
405 | -charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py |
406 | -charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py |
407 | - |
408 | -charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py |
409 | -charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py |
410 | -charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py |
411 | -charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py |
412 | - |
413 | -charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport |
414 | |
415 | === modified file 'hooks/charmhelpers/contrib/hahelpers/apache.py' |
416 | --- hooks/charmhelpers/contrib/hahelpers/apache.py 2013-11-26 17:12:54 +0000 |
417 | +++ hooks/charmhelpers/contrib/hahelpers/apache.py 2014-11-19 17:44:21 +0000 |
418 | @@ -39,14 +39,15 @@ |
419 | |
420 | |
421 | def get_ca_cert(): |
422 | - ca_cert = None |
423 | - log("Inspecting identity-service relations for CA SSL certificate.", |
424 | - level=INFO) |
425 | - for r_id in relation_ids('identity-service'): |
426 | - for unit in relation_list(r_id): |
427 | - if not ca_cert: |
428 | - ca_cert = relation_get('ca_cert', |
429 | - rid=r_id, unit=unit) |
430 | + ca_cert = config_get('ssl_ca') |
431 | + if ca_cert is None: |
432 | + log("Inspecting identity-service relations for CA SSL certificate.", |
433 | + level=INFO) |
434 | + for r_id in relation_ids('identity-service'): |
435 | + for unit in relation_list(r_id): |
436 | + if ca_cert is None: |
437 | + ca_cert = relation_get('ca_cert', |
438 | + rid=r_id, unit=unit) |
439 | return ca_cert |
440 | |
441 | |
442 | |
443 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
444 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-11-26 17:12:54 +0000 |
445 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-11-19 17:44:21 +0000 |
446 | @@ -126,17 +126,17 @@ |
447 | return public_port - (i * 10) |
448 | |
449 | |
450 | -def determine_haproxy_port(public_port): |
451 | +def determine_apache_port(public_port): |
452 | ''' |
453 | - Description: Determine correct proxy listening port based on public IP + |
454 | - existence of HTTPS reverse proxy. |
455 | + Description: Determine correct apache listening port based on public IP + |
456 | + state of the cluster. |
457 | |
458 | public_port: int: standard public port for given service |
459 | |
460 | returns: int: the correct listening port for the HAProxy service |
461 | ''' |
462 | i = 0 |
463 | - if https(): |
464 | + if len(peer_units()) > 0 or is_clustered(): |
465 | i += 1 |
466 | return public_port - (i * 10) |
467 | |
468 | |
469 | === removed file 'hooks/charmhelpers/contrib/jujugui/IMPORT' |
470 | --- hooks/charmhelpers/contrib/jujugui/IMPORT 2013-11-26 17:12:54 +0000 |
471 | +++ hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000 |
472 | @@ -1,4 +0,0 @@ |
473 | -Source: lp:charms/juju-gui |
474 | - |
475 | -juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py |
476 | -juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py |
477 | |
478 | === added file 'hooks/charmhelpers/contrib/network/ip.py' |
479 | --- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000 |
480 | +++ hooks/charmhelpers/contrib/network/ip.py 2014-11-19 17:44:21 +0000 |
481 | @@ -0,0 +1,69 @@ |
482 | +import sys |
483 | + |
484 | +from charmhelpers.fetch import apt_install |
485 | +from charmhelpers.core.hookenv import ( |
486 | + ERROR, log, |
487 | +) |
488 | + |
489 | +try: |
490 | + import netifaces |
491 | +except ImportError: |
492 | + apt_install('python-netifaces') |
493 | + import netifaces |
494 | + |
495 | +try: |
496 | + import netaddr |
497 | +except ImportError: |
498 | + apt_install('python-netaddr') |
499 | + import netaddr |
500 | + |
501 | + |
502 | +def _validate_cidr(network): |
503 | + try: |
504 | + netaddr.IPNetwork(network) |
505 | + except (netaddr.core.AddrFormatError, ValueError): |
506 | + raise ValueError("Network (%s) is not in CIDR presentation format" % |
507 | + network) |
508 | + |
509 | + |
510 | +def get_address_in_network(network, fallback=None, fatal=False): |
511 | + """ |
512 | + Get an IPv4 address within the network from the host. |
513 | + |
514 | + Args: |
515 | + network (str): CIDR presentation format. For example, |
516 | + '192.168.1.0/24'. |
517 | + fallback (str): If no address is found, return fallback. |
518 | + fatal (boolean): If no address is found, fallback is not |
519 | + set and fatal is True then exit(1). |
520 | + """ |
521 | + |
522 | + def not_found_error_out(): |
523 | + log("No IP address found in network: %s" % network, |
524 | + level=ERROR) |
525 | + sys.exit(1) |
526 | + |
527 | + if network is None: |
528 | + if fallback is not None: |
529 | + return fallback |
530 | + else: |
531 | + if fatal: |
532 | + not_found_error_out() |
533 | + |
534 | + _validate_cidr(network) |
535 | + for iface in netifaces.interfaces(): |
536 | + addresses = netifaces.ifaddresses(iface) |
537 | + if netifaces.AF_INET in addresses: |
538 | + addr = addresses[netifaces.AF_INET][0]['addr'] |
539 | + netmask = addresses[netifaces.AF_INET][0]['netmask'] |
540 | + cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
541 | + if cidr in netaddr.IPNetwork(network): |
542 | + return str(cidr.ip) |
543 | + |
544 | + if fallback is not None: |
545 | + return fallback |
546 | + |
547 | + if fatal: |
548 | + not_found_error_out() |
549 | + |
550 | + return None |
551 | |
552 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
553 | --- hooks/charmhelpers/contrib/openstack/context.py 2013-11-26 17:12:54 +0000 |
554 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-11-19 17:44:21 +0000 |
555 | @@ -1,5 +1,6 @@ |
556 | import json |
557 | import os |
558 | +import time |
559 | |
560 | from base64 import b64decode |
561 | |
562 | @@ -23,15 +24,13 @@ |
563 | unit_get, |
564 | unit_private_ip, |
565 | ERROR, |
566 | - WARNING, |
567 | ) |
568 | |
569 | from charmhelpers.contrib.hahelpers.cluster import ( |
570 | + determine_apache_port, |
571 | determine_api_port, |
572 | - determine_haproxy_port, |
573 | https, |
574 | - is_clustered, |
575 | - peer_units, |
576 | + is_clustered |
577 | ) |
578 | |
579 | from charmhelpers.contrib.hahelpers.apache import ( |
580 | @@ -68,6 +67,43 @@ |
581 | return True |
582 | |
583 | |
584 | +def config_flags_parser(config_flags): |
585 | + if config_flags.find('==') >= 0: |
586 | + log("config_flags is not in expected format (key=value)", |
587 | + level=ERROR) |
588 | + raise OSContextError |
589 | + # strip the following from each value. |
590 | + post_strippers = ' ,' |
591 | + # we strip any leading/trailing '=' or ' ' from the string then |
592 | + # split on '='. |
593 | + split = config_flags.strip(' =').split('=') |
594 | + limit = len(split) |
595 | + flags = {} |
596 | + for i in xrange(0, limit - 1): |
597 | + current = split[i] |
598 | + next = split[i + 1] |
599 | + vindex = next.rfind(',') |
600 | + if (i == limit - 2) or (vindex < 0): |
601 | + value = next |
602 | + else: |
603 | + value = next[:vindex] |
604 | + |
605 | + if i == 0: |
606 | + key = current |
607 | + else: |
608 | + # if this not the first entry, expect an embedded key. |
609 | + index = current.rfind(',') |
610 | + if index < 0: |
611 | + log("invalid config value(s) at index %s" % (i), |
612 | + level=ERROR) |
613 | + raise OSContextError |
614 | + key = current[index + 1:] |
615 | + |
616 | + # Add to collection. |
617 | + flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
618 | + return flags |
619 | + |
620 | + |
621 | class OSContextGenerator(object): |
622 | interfaces = [] |
623 | |
624 | @@ -78,7 +114,8 @@ |
625 | class SharedDBContext(OSContextGenerator): |
626 | interfaces = ['shared-db'] |
627 | |
628 | - def __init__(self, database=None, user=None, relation_prefix=None): |
629 | + def __init__(self, |
630 | + database=None, user=None, relation_prefix=None, ssl_dir=None): |
631 | ''' |
632 | Allows inspecting relation for settings prefixed with relation_prefix. |
633 | This is useful for parsing access for multiple databases returned via |
634 | @@ -87,6 +124,7 @@ |
635 | self.relation_prefix = relation_prefix |
636 | self.database = database |
637 | self.user = user |
638 | + self.ssl_dir = ssl_dir |
639 | |
640 | def __call__(self): |
641 | self.database = self.database or config('database') |
642 | @@ -104,17 +142,72 @@ |
643 | |
644 | for rid in relation_ids('shared-db'): |
645 | for unit in related_units(rid): |
646 | - passwd = relation_get(password_setting, rid=rid, unit=unit) |
647 | + rdata = relation_get(rid=rid, unit=unit) |
648 | ctxt = { |
649 | - 'database_host': relation_get('db_host', rid=rid, |
650 | - unit=unit), |
651 | + 'database_host': rdata.get('db_host'), |
652 | 'database': self.database, |
653 | 'database_user': self.user, |
654 | - 'database_password': passwd, |
655 | - } |
656 | - if context_complete(ctxt): |
657 | - return ctxt |
658 | - return {} |
659 | + 'database_password': rdata.get(password_setting), |
660 | + 'database_type': 'mysql' |
661 | + } |
662 | + if context_complete(ctxt): |
663 | + db_ssl(rdata, ctxt, self.ssl_dir) |
664 | + return ctxt |
665 | + return {} |
666 | + |
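Both database contexts return `{}` until `context_complete(ctxt)` passes. That guard is imported from charm-helpers and not shown in this diff; a plausible sketch of its contract (an assumption for illustration, not the actual implementation):

```python
def context_complete(ctxt):
    """Assumed sketch of the context_complete() guard used by the
    context generators: a context is only usable once every
    relation-supplied value is actually set."""
    return all(v not in (None, '') for v in ctxt.values())
```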
667 | + |
668 | +class PostgresqlDBContext(OSContextGenerator): |
669 | + interfaces = ['pgsql-db'] |
670 | + |
671 | + def __init__(self, database=None): |
672 | + self.database = database |
673 | + |
674 | + def __call__(self): |
675 | + self.database = self.database or config('database') |
676 | + if self.database is None: |
677 | + log('Could not generate postgresql_db context. ' |
678 | + 'Missing required charm config options. ' |
679 | + '(database name)') |
680 | + raise OSContextError |
681 | + ctxt = {} |
682 | + |
683 | + for rid in relation_ids(self.interfaces[0]): |
684 | + for unit in related_units(rid): |
685 | + ctxt = { |
686 | + 'database_host': relation_get('host', rid=rid, unit=unit), |
687 | + 'database': self.database, |
688 | + 'database_user': relation_get('user', rid=rid, unit=unit), |
689 | + 'database_password': relation_get('password', rid=rid, unit=unit), |
690 | + 'database_type': 'postgresql', |
691 | + } |
692 | + if context_complete(ctxt): |
693 | + return ctxt |
694 | + return {} |
695 | + |
696 | + |
697 | +def db_ssl(rdata, ctxt, ssl_dir): |
698 | + if 'ssl_ca' in rdata and ssl_dir: |
699 | + ca_path = os.path.join(ssl_dir, 'db-client.ca') |
700 | + with open(ca_path, 'w') as fh: |
701 | + fh.write(b64decode(rdata['ssl_ca'])) |
702 | + ctxt['database_ssl_ca'] = ca_path |
703 | + elif 'ssl_ca' in rdata: |
704 | + log("Charm not setup for ssl support but ssl ca found") |
705 | + return ctxt |
706 | + if 'ssl_cert' in rdata: |
707 | + cert_path = os.path.join( |
708 | + ssl_dir, 'db-client.cert') |
709 | + if not os.path.exists(cert_path): |
710 | + log("Waiting 1m for ssl client cert validity") |
711 | + time.sleep(60) |
712 | + with open(cert_path, 'w') as fh: |
713 | + fh.write(b64decode(rdata['ssl_cert'])) |
714 | + ctxt['database_ssl_cert'] = cert_path |
715 | + key_path = os.path.join(ssl_dir, 'db-client.key') |
716 | + with open(key_path, 'w') as fh: |
717 | + fh.write(b64decode(rdata['ssl_key'])) |
718 | + ctxt['database_ssl_key'] = key_path |
719 | + return ctxt |
720 | |
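`db_ssl` works because relation data carries PEM material base64-encoded; the helper decodes it to files and puts the paths into the template context. The `ssl_ca` branch can be exercised in isolation (paths and the temp directory here are illustrative):

```python
import base64
import os
import tempfile

def write_ca(rdata, ssl_dir):
    """Sketch of the ssl_ca branch of db_ssl(): decode the
    base64-encoded CA from relation data, write it under ssl_dir,
    and expose the path for templating."""
    ctxt = {}
    if 'ssl_ca' in rdata and ssl_dir:
        ca_path = os.path.join(ssl_dir, 'db-client.ca')
        with open(ca_path, 'w') as fh:
            fh.write(base64.b64decode(rdata['ssl_ca']).decode())
        ctxt['database_ssl_ca'] = ca_path
    return ctxt

ssl_dir = tempfile.mkdtemp()
rdata = {'ssl_ca': base64.b64encode(b'-----BEGIN CERTIFICATE-----').decode()}
ctxt = write_ca(rdata, ssl_dir)
```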
721 | |
722 | class IdentityServiceContext(OSContextGenerator): |
723 | @@ -126,24 +219,25 @@ |
724 | |
725 | for rid in relation_ids('identity-service'): |
726 | for unit in related_units(rid): |
727 | + rdata = relation_get(rid=rid, unit=unit) |
728 | ctxt = { |
729 | - 'service_port': relation_get('service_port', rid=rid, |
730 | - unit=unit), |
731 | - 'service_host': relation_get('service_host', rid=rid, |
732 | - unit=unit), |
733 | - 'auth_host': relation_get('auth_host', rid=rid, unit=unit), |
734 | - 'auth_port': relation_get('auth_port', rid=rid, unit=unit), |
735 | - 'admin_tenant_name': relation_get('service_tenant', |
736 | - rid=rid, unit=unit), |
737 | - 'admin_user': relation_get('service_username', rid=rid, |
738 | - unit=unit), |
739 | - 'admin_password': relation_get('service_password', rid=rid, |
740 | - unit=unit), |
741 | - # XXX: Hard-coded http. |
742 | - 'service_protocol': 'http', |
743 | - 'auth_protocol': 'http', |
744 | + 'service_port': rdata.get('service_port'), |
745 | + 'service_host': rdata.get('service_host'), |
746 | + 'auth_host': rdata.get('auth_host'), |
747 | + 'auth_port': rdata.get('auth_port'), |
748 | + 'admin_tenant_name': rdata.get('service_tenant'), |
749 | + 'admin_user': rdata.get('service_username'), |
750 | + 'admin_password': rdata.get('service_password'), |
751 | + 'service_protocol': |
752 | + rdata.get('service_protocol') or 'http', |
753 | + 'auth_protocol': |
754 | + rdata.get('auth_protocol') or 'http', |
755 | } |
756 | if context_complete(ctxt): |
757 | + # NOTE(jamespage) this is required for >= icehouse |
758 | + # so a missing value just indicates keystone needs |
759 | + # upgrading |
760 | + ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') |
761 | return ctxt |
762 | return {} |
763 | |
764 | @@ -151,6 +245,9 @@ |
765 | class AMQPContext(OSContextGenerator): |
766 | interfaces = ['amqp'] |
767 | |
768 | + def __init__(self, ssl_dir=None): |
769 | + self.ssl_dir = ssl_dir |
770 | + |
771 | def __call__(self): |
772 | log('Generating template context for amqp') |
773 | conf = config() |
774 | @@ -161,9 +258,9 @@ |
775 | log('Could not generate shared_db context. ' |
776 | 'Missing required charm config options: %s.' % e) |
777 | raise OSContextError |
778 | - |
779 | ctxt = {} |
780 | for rid in relation_ids('amqp'): |
781 | + ha_vip_only = False |
782 | for unit in related_units(rid): |
783 | if relation_get('clustered', rid=rid, unit=unit): |
784 | ctxt['clustered'] = True |
785 | @@ -178,14 +275,41 @@ |
786 | unit=unit), |
787 | 'rabbitmq_virtual_host': vhost, |
788 | }) |
789 | + |
790 | + ssl_port = relation_get('ssl_port', rid=rid, unit=unit) |
791 | + if ssl_port: |
792 | + ctxt['rabbit_ssl_port'] = ssl_port |
793 | + ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) |
794 | + if ssl_ca: |
795 | + ctxt['rabbit_ssl_ca'] = ssl_ca |
796 | + |
797 | + if relation_get('ha_queues', rid=rid, unit=unit) is not None: |
798 | + ctxt['rabbitmq_ha_queues'] = True |
799 | + |
800 | + ha_vip_only = relation_get('ha-vip-only', |
801 | + rid=rid, unit=unit) is not None |
802 | + |
803 | if context_complete(ctxt): |
804 | + if 'rabbit_ssl_ca' in ctxt: |
805 | + if not self.ssl_dir: |
806 | + log(("Charm not setup for ssl support " |
807 | + "but ssl ca found")) |
808 | + break |
809 | + ca_path = os.path.join( |
810 | + self.ssl_dir, 'rabbit-client-ca.pem') |
811 | + with open(ca_path, 'w') as fh: |
812 | + fh.write(b64decode(ctxt['rabbit_ssl_ca'])) |
813 | + ctxt['rabbit_ssl_ca'] = ca_path |
814 | # Sufficient information found = break out! |
815 | break |
816 | # Used for active/active rabbitmq >= grizzly |
817 | - ctxt['rabbitmq_hosts'] = [] |
818 | - for unit in related_units(rid): |
819 | - ctxt['rabbitmq_hosts'].append(relation_get('private-address', |
820 | - rid=rid, unit=unit)) |
821 | + if ('clustered' not in ctxt or ha_vip_only) \ |
822 | + and len(related_units(rid)) > 1: |
823 | + rabbitmq_hosts = [] |
824 | + for unit in related_units(rid): |
825 | + rabbitmq_hosts.append(relation_get('private-address', |
826 | + rid=rid, unit=unit)) |
827 | + ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) |
828 | if not context_complete(ctxt): |
829 | return {} |
830 | else: |
831 | @@ -199,10 +323,13 @@ |
832 | '''This generates context for /etc/ceph/ceph.conf templates''' |
833 | if not relation_ids('ceph'): |
834 | return {} |
835 | + |
836 | log('Generating template context for ceph') |
837 | + |
838 | mon_hosts = [] |
839 | auth = None |
840 | key = None |
841 | + use_syslog = str(config('use-syslog')).lower() |
842 | for rid in relation_ids('ceph'): |
843 | for unit in related_units(rid): |
844 | mon_hosts.append(relation_get('private-address', rid=rid, |
845 | @@ -214,6 +341,7 @@ |
846 | 'mon_hosts': ' '.join(mon_hosts), |
847 | 'auth': auth, |
848 | 'key': key, |
849 | + 'use_syslog': use_syslog |
850 | } |
851 | |
852 | if not os.path.isdir('/etc/ceph'): |
853 | @@ -286,6 +414,7 @@ |
854 | |
855 | |
856 | class ApacheSSLContext(OSContextGenerator): |
857 | + |
858 | """ |
859 | Generates a context for an apache vhost configuration that configures |
860 | HTTPS reverse proxying for one or many endpoints. Generated context |
861 | @@ -341,17 +470,17 @@ |
862 | 'private_address': unit_get('private-address'), |
863 | 'endpoints': [] |
864 | } |
865 | - for ext_port in self.external_ports: |
866 | - if peer_units() or is_clustered(): |
867 | - int_port = determine_haproxy_port(ext_port) |
868 | - else: |
869 | - int_port = determine_api_port(ext_port) |
870 | + if is_clustered(): |
871 | + ctxt['private_address'] = config('vip') |
872 | + for api_port in self.external_ports: |
873 | + ext_port = determine_apache_port(api_port) |
874 | + int_port = determine_api_port(api_port) |
875 | portmap = (int(ext_port), int(int_port)) |
876 | ctxt['endpoints'].append(portmap) |
877 | return ctxt |
878 | |
879 | |
880 | -class NeutronContext(object): |
881 | +class NeutronContext(OSContextGenerator): |
882 | interfaces = [] |
883 | |
884 | @property |
885 | @@ -412,6 +541,22 @@ |
886 | |
887 | return nvp_ctxt |
888 | |
889 | + def neutron_ctxt(self): |
890 | + if https(): |
891 | + proto = 'https' |
892 | + else: |
893 | + proto = 'http' |
894 | + if is_clustered(): |
895 | + host = config('vip') |
896 | + else: |
897 | + host = unit_get('private-address') |
898 | + url = '%s://%s:%s' % (proto, host, '9696') |
899 | + ctxt = { |
900 | + 'network_manager': self.network_manager, |
901 | + 'neutron_url': url, |
902 | + } |
903 | + return ctxt |
904 | + |
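The new `neutron_ctxt` builds the API URL from two independent switches: HTTPS termination and clustering (which substitutes the VIP for the unit address). The selection logic reduces to:

```python
def neutron_url(use_https, clustered, vip, private_address, port=9696):
    """Sketch of neutron_ctxt()'s URL construction: prefer the
    cluster VIP when clustered, and https when TLS is in play.
    9696 is the neutron API port used in the diff above."""
    proto = 'https' if use_https else 'http'
    host = vip if clustered else private_address
    return '%s://%s:%s' % (proto, host, port)
```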
905 | def __call__(self): |
906 | self._ensure_packages() |
907 | |
908 | @@ -421,40 +566,44 @@ |
909 | if not self.plugin: |
910 | return {} |
911 | |
912 | - ctxt = {'network_manager': self.network_manager} |
913 | + ctxt = self.neutron_ctxt() |
914 | |
915 | if self.plugin == 'ovs': |
916 | ctxt.update(self.ovs_ctxt()) |
917 | elif self.plugin == 'nvp': |
918 | ctxt.update(self.nvp_ctxt()) |
919 | |
920 | + alchemy_flags = config('neutron-alchemy-flags') |
921 | + if alchemy_flags: |
922 | + flags = config_flags_parser(alchemy_flags) |
923 | + ctxt['neutron_alchemy_flags'] = flags |
924 | + |
925 | self._save_flag_file() |
926 | return ctxt |
927 | |
928 | |
929 | class OSConfigFlagContext(OSContextGenerator): |
930 | - ''' |
931 | - Responsible adding user-defined config-flags in charm config to a |
932 | - to a template context. |
933 | - ''' |
934 | + |
935 | + """ |
936 | + Responsible for adding user-defined config-flags in charm config to a |
937 | + template context. |
938 | + |
939 | + NOTE: the value of config-flags may be a comma-separated list of |
940 | + key=value pairs and some Openstack config files support |
941 | + comma-separated lists as values. |
942 | + """ |
943 | + |
944 | def __call__(self): |
945 | config_flags = config('config-flags') |
946 | - if not config_flags or config_flags in ['None', '']: |
947 | + if not config_flags: |
948 | return {} |
949 | - config_flags = config_flags.split(',') |
950 | - flags = {} |
951 | - for flag in config_flags: |
952 | - if '=' not in flag: |
953 | - log('Improperly formatted config-flag, expected k=v ' |
954 | - 'got %s' % flag, level=WARNING) |
955 | - continue |
956 | - k, v = flag.split('=') |
957 | - flags[k.strip()] = v |
958 | - ctxt = {'user_config_flags': flags} |
959 | - return ctxt |
960 | + |
961 | + flags = config_flags_parser(config_flags) |
962 | + return {'user_config_flags': flags} |
963 | |
964 | |
965 | class SubordinateConfigContext(OSContextGenerator): |
966 | + |
967 | """ |
968 | Responsible for inspecting relations to subordinates that |
969 | may be exporting required config via a json blob. |
970 | @@ -495,6 +644,7 @@ |
971 | } |
972 | |
973 | """ |
974 | + |
975 | def __init__(self, service, config_file, interface): |
976 | """ |
977 | :param service : Service name key to query in any subordinate |
978 | @@ -539,3 +689,12 @@ |
979 | ctxt['sections'] = {} |
980 | |
981 | return ctxt |
982 | + |
983 | + |
984 | +class SyslogContext(OSContextGenerator): |
985 | + |
986 | + def __call__(self): |
987 | + ctxt = { |
988 | + 'use_syslog': config('use-syslog') |
989 | + } |
990 | + return ctxt |
991 | |
992 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
993 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2013-11-26 17:12:54 +0000 |
994 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-11-19 17:44:21 +0000 |
995 | @@ -17,8 +17,28 @@ |
996 | kver = check_output(['uname', '-r']).strip() |
997 | return 'linux-headers-%s' % kver |
998 | |
999 | +QUANTUM_CONF_DIR = '/etc/quantum' |
1000 | + |
1001 | + |
1002 | +def kernel_version(): |
1003 | + """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ |
1004 | + kver = check_output(['uname', '-r']).strip() |
1005 | + kver = kver.split('.') |
1006 | + return (int(kver[0]), int(kver[1])) |
1007 | + |
1008 | + |
1009 | +def determine_dkms_package(): |
1010 | + """ Determine which DKMS package should be used based on kernel version """ |
1011 | + # NOTE: 3.13 kernels have support for GRE and VXLAN native |
1012 | + if kernel_version() >= (3, 13): |
1013 | + return [] |
1014 | + else: |
1015 | + return ['openvswitch-datapath-dkms'] |
1016 | + |
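`determine_dkms_package` hinges on comparing the running kernel as a `(major, minor)` tuple, which compares element-wise. A self-contained sketch taking the `uname -r` string as a parameter instead of shelling out:

```python
def kernel_version(uname_r):
    """Parse `uname -r` output (e.g. '3.13.0-24-generic') into a
    (major, minor) tuple; tuples compare element-wise, so
    (3, 2) < (3, 13) as expected."""
    major, minor = uname_r.split('.')[:2]
    return (int(major), int(minor))

def determine_dkms_package(uname_r):
    # 3.13 kernels gained native GRE/VXLAN support, so the DKMS
    # datapath module is only needed on older kernels.
    if kernel_version(uname_r) >= (3, 13):
        return []
    return ['openvswitch-datapath-dkms']
```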
1017 | |
1018 | # legacy |
1019 | + |
1020 | + |
1021 | def quantum_plugins(): |
1022 | from charmhelpers.contrib.openstack import context |
1023 | return { |
1024 | @@ -30,9 +50,10 @@ |
1025 | 'contexts': [ |
1026 | context.SharedDBContext(user=config('neutron-database-user'), |
1027 | database=config('neutron-database'), |
1028 | - relation_prefix='neutron')], |
1029 | + relation_prefix='neutron', |
1030 | + ssl_dir=QUANTUM_CONF_DIR)], |
1031 | 'services': ['quantum-plugin-openvswitch-agent'], |
1032 | - 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
1033 | + 'packages': [[headers_package()] + determine_dkms_package(), |
1034 | ['quantum-plugin-openvswitch-agent']], |
1035 | 'server_packages': ['quantum-server', |
1036 | 'quantum-plugin-openvswitch'], |
1037 | @@ -45,7 +66,8 @@ |
1038 | 'contexts': [ |
1039 | context.SharedDBContext(user=config('neutron-database-user'), |
1040 | database=config('neutron-database'), |
1041 | - relation_prefix='neutron')], |
1042 | + relation_prefix='neutron', |
1043 | + ssl_dir=QUANTUM_CONF_DIR)], |
1044 | 'services': [], |
1045 | 'packages': [], |
1046 | 'server_packages': ['quantum-server', |
1047 | @@ -54,10 +76,13 @@ |
1048 | } |
1049 | } |
1050 | |
1051 | +NEUTRON_CONF_DIR = '/etc/neutron' |
1052 | + |
1053 | |
1054 | def neutron_plugins(): |
1055 | from charmhelpers.contrib.openstack import context |
1056 | - return { |
1057 | + release = os_release('nova-common') |
1058 | + plugins = { |
1059 | 'ovs': { |
1060 | 'config': '/etc/neutron/plugins/openvswitch/' |
1061 | 'ovs_neutron_plugin.ini', |
1062 | @@ -66,10 +91,11 @@ |
1063 | 'contexts': [ |
1064 | context.SharedDBContext(user=config('neutron-database-user'), |
1065 | database=config('neutron-database'), |
1066 | - relation_prefix='neutron')], |
1067 | + relation_prefix='neutron', |
1068 | + ssl_dir=NEUTRON_CONF_DIR)], |
1069 | 'services': ['neutron-plugin-openvswitch-agent'], |
1070 | - 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
1071 | - ['quantum-plugin-openvswitch-agent']], |
1072 | + 'packages': [[headers_package()] + determine_dkms_package(), |
1073 | + ['neutron-plugin-openvswitch-agent']], |
1074 | 'server_packages': ['neutron-server', |
1075 | 'neutron-plugin-openvswitch'], |
1076 | 'server_services': ['neutron-server'] |
1077 | @@ -81,7 +107,8 @@ |
1078 | 'contexts': [ |
1079 | context.SharedDBContext(user=config('neutron-database-user'), |
1080 | database=config('neutron-database'), |
1081 | - relation_prefix='neutron')], |
1082 | + relation_prefix='neutron', |
1083 | + ssl_dir=NEUTRON_CONF_DIR)], |
1084 | 'services': [], |
1085 | 'packages': [], |
1086 | 'server_packages': ['neutron-server', |
1087 | @@ -89,6 +116,13 @@ |
1088 | 'server_services': ['neutron-server'] |
1089 | } |
1090 | } |
1091 | + # NOTE: patch in ml2 plugin for icehouse onwards |
1092 | + if release >= 'icehouse': |
1093 | + plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
1094 | + plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' |
1095 | + plugins['ovs']['server_packages'] = ['neutron-server', |
1096 | + 'neutron-plugin-ml2'] |
1097 | + return plugins |
1098 | |
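The `release >= 'icehouse'` guard works only because OpenStack codenames are alphabetical (folsom, grizzly, havana, icehouse, ...), so plain lexicographic string comparison matches release order. The ml2 patch-in above reduces to:

```python
def ovs_plugin_config(release):
    """Sketch of the icehouse patch-in: codenames sort
    alphabetically, so string comparison orders releases, and from
    icehouse onwards the legacy openvswitch plugin config is
    replaced by ml2."""
    if release >= 'icehouse':
        return '/etc/neutron/plugins/ml2/ml2_conf.ini'
    return '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
```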
1099 | |
1100 | def neutron_plugin_attribute(plugin, attr, net_manager=None): |
1101 | |
1102 | === removed file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf' |
1103 | --- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2013-11-26 17:12:54 +0000 |
1104 | +++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000 |
1105 | @@ -1,11 +0,0 @@ |
1106 | -############################################################################### |
1107 | -# [ WARNING ] |
1108 | -# cinder configuration file maintained by Juju |
1109 | -# local changes may be overwritten. |
1110 | -############################################################################### |
1111 | -{% if auth -%} |
1112 | -[global] |
1113 | - auth_supported = {{ auth }} |
1114 | - keyring = /etc/ceph/$cluster.$name.keyring |
1115 | - mon host = {{ mon_hosts }} |
1116 | -{% endif -%} |
1117 | |
1118 | === removed file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg' |
1119 | --- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2013-11-26 17:12:54 +0000 |
1120 | +++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000 |
1121 | @@ -1,37 +0,0 @@ |
1122 | -global |
1123 | - log 127.0.0.1 local0 |
1124 | - log 127.0.0.1 local1 notice |
1125 | - maxconn 20000 |
1126 | - user haproxy |
1127 | - group haproxy |
1128 | - spread-checks 0 |
1129 | - |
1130 | -defaults |
1131 | - log global |
1132 | - mode http |
1133 | - option httplog |
1134 | - option dontlognull |
1135 | - retries 3 |
1136 | - timeout queue 1000 |
1137 | - timeout connect 1000 |
1138 | - timeout client 30000 |
1139 | - timeout server 30000 |
1140 | - |
1141 | -listen stats :8888 |
1142 | - mode http |
1143 | - stats enable |
1144 | - stats hide-version |
1145 | - stats realm Haproxy\ Statistics |
1146 | - stats uri / |
1147 | - stats auth admin:password |
1148 | - |
1149 | -{% if units -%} |
1150 | -{% for service, ports in service_ports.iteritems() -%} |
1151 | -listen {{ service }} 0.0.0.0:{{ ports[0] }} |
1152 | - balance roundrobin |
1153 | - option tcplog |
1154 | - {% for unit, address in units.iteritems() -%} |
1155 | - server {{ unit }} {{ address }}:{{ ports[1] }} check |
1156 | - {% endfor %} |
1157 | -{% endfor -%} |
1158 | -{% endif -%} |
1159 | |
1160 | === removed file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend' |
1161 | --- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2013-11-26 17:12:54 +0000 |
1162 | +++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000 |
1163 | @@ -1,23 +0,0 @@ |
1164 | -{% if endpoints -%} |
1165 | -{% for ext, int in endpoints -%} |
1166 | -Listen {{ ext }} |
1167 | -NameVirtualHost *:{{ ext }} |
1168 | -<VirtualHost *:{{ ext }}> |
1169 | - ServerName {{ private_address }} |
1170 | - SSLEngine on |
1171 | - SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert |
1172 | - SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key |
1173 | - ProxyPass / http://localhost:{{ int }}/ |
1174 | - ProxyPassReverse / http://localhost:{{ int }}/ |
1175 | - ProxyPreserveHost on |
1176 | -</VirtualHost> |
1177 | -<Proxy *> |
1178 | - Order deny,allow |
1179 | - Allow from all |
1180 | -</Proxy> |
1181 | -<Location /> |
1182 | - Order allow,deny |
1183 | - Allow from all |
1184 | -</Location> |
1185 | -{% endfor -%} |
1186 | -{% endif -%} |
1187 | |
1188 | === removed symlink 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf' |
1189 | === target was u'openstack_https_frontend' |
1190 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' |
1191 | --- hooks/charmhelpers/contrib/openstack/utils.py 2013-11-26 17:12:54 +0000 |
1192 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-11-19 17:44:21 +0000 |
1193 | @@ -41,6 +41,7 @@ |
1194 | ('quantal', 'folsom'), |
1195 | ('raring', 'grizzly'), |
1196 | ('saucy', 'havana'), |
1197 | + ('trusty', 'icehouse') |
1198 | ]) |
1199 | |
1200 | |
1201 | @@ -64,6 +65,10 @@ |
1202 | ('1.10.0', 'havana'), |
1203 | ('1.9.1', 'havana'), |
1204 | ('1.9.0', 'havana'), |
1205 | + ('1.13.1', 'icehouse'), |
1206 | + ('1.13.0', 'icehouse'), |
1207 | + ('1.12.0', 'icehouse'), |
1208 | + ('1.11.0', 'icehouse'), |
1209 | ]) |
1210 | |
1211 | DEFAULT_LOOPBACK_SIZE = '5G' |
1212 | @@ -201,7 +206,7 @@ |
1213 | |
1214 | |
1215 | def import_key(keyid): |
1216 | - cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ |
1217 | + cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ |
1218 | "--recv-keys %s" % keyid |
1219 | try: |
1220 | subprocess.check_call(cmd.split(' ')) |
1221 | @@ -260,6 +265,9 @@ |
1222 | 'havana': 'precise-updates/havana', |
1223 | 'havana/updates': 'precise-updates/havana', |
1224 | 'havana/proposed': 'precise-proposed/havana', |
1225 | + 'icehouse': 'precise-updates/icehouse', |
1226 | + 'icehouse/updates': 'precise-updates/icehouse', |
1227 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
1228 | } |
1229 | |
1230 | try: |
1231 | @@ -393,6 +401,8 @@ |
1232 | rtype = 'PTR' |
1233 | elif isinstance(address, basestring): |
1234 | rtype = 'A' |
1235 | + else: |
1236 | + return None |
1237 | |
1238 | answers = dns.resolver.query(address, rtype) |
1239 | if answers: |
1240 | @@ -411,26 +421,30 @@ |
1241 | return ns_query(hostname) |
1242 | |
1243 | |
1244 | -def get_hostname(address): |
1245 | +def get_hostname(address, fqdn=True): |
1246 | """ |
1247 | Resolves hostname for given IP, or returns the input |
1248 | if it is already a hostname. |
1249 | """ |
1250 | - if not is_ip(address): |
1251 | - return address |
1252 | - |
1253 | - try: |
1254 | - import dns.reversename |
1255 | - except ImportError: |
1256 | - apt_install('python-dnspython') |
1257 | - import dns.reversename |
1258 | - |
1259 | - rev = dns.reversename.from_address(address) |
1260 | - result = ns_query(rev) |
1261 | - if not result: |
1262 | - return None |
1263 | - |
1264 | - # strip trailing . |
1265 | - if result.endswith('.'): |
1266 | - return result[:-1] |
1267 | - return result |
1268 | + if is_ip(address): |
1269 | + try: |
1270 | + import dns.reversename |
1271 | + except ImportError: |
1272 | + apt_install('python-dnspython') |
1273 | + import dns.reversename |
1274 | + |
1275 | + rev = dns.reversename.from_address(address) |
1276 | + result = ns_query(rev) |
1277 | + if not result: |
1278 | + return None |
1279 | + else: |
1280 | + result = address |
1281 | + |
1282 | + if fqdn: |
1283 | + # strip trailing . |
1284 | + if result.endswith('.'): |
1285 | + return result[:-1] |
1286 | + else: |
1287 | + return result |
1288 | + else: |
1289 | + return result.split('.')[0] |
1290 | |
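The reworked `get_hostname` tail handles two details: reverse-DNS answers end with a trailing dot (stripped for the FQDN form), and the new `fqdn=False` mode returns just the short host name. Isolated as a pure function:

```python
def format_hostname(result, fqdn=True):
    """Sketch of get_hostname()'s new return logic: strip the
    trailing dot DNS answers carry when returning an FQDN, or take
    only the first label for the short name."""
    if fqdn:
        return result[:-1] if result.endswith('.') else result
    return result.split('.')[0]
```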
1291 | === added directory 'hooks/charmhelpers/contrib/peerstorage' |
1292 | === added file 'hooks/charmhelpers/contrib/peerstorage/__init__.py' |
1293 | --- hooks/charmhelpers/contrib/peerstorage/__init__.py 1970-01-01 00:00:00 +0000 |
1294 | +++ hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-11-19 17:44:21 +0000 |
1295 | @@ -0,0 +1,83 @@ |
1296 | +from charmhelpers.core.hookenv import ( |
1297 | + relation_ids, |
1298 | + relation_get, |
1299 | + local_unit, |
1300 | + relation_set, |
1301 | +) |
1302 | + |
1303 | +""" |
1304 | +This helper provides functions to support use of a peer relation |
1305 | +for basic key/value storage, with the added benefit that all storage |
1306 | +can be replicated across peer units, so this is really useful for |
1307 | +services that issue usernames/passwords to remote services. |
1308 | + |
1309 | +def shared_db_changed() |
1310 | + # Only the lead unit should create passwords |
1311 | + if not is_leader(): |
1312 | + return |
1313 | + username = relation_get('username') |
1314 | + key = '{}.password'.format(username) |
1315 | + # Attempt to retrieve any existing password for this user |
1316 | + password = peer_retrieve(key) |
1317 | + if password is None: |
1318 | + # New user, create password and store |
1319 | + password = pwgen(length=64) |
1320 | + peer_store(key, password) |
1321 | + create_access(username, password) |
1322 | + relation_set(password=password) |
1323 | + |
1324 | + |
1325 | +def cluster_changed() |
1326 | + # Echo any relation data other than *-address |
1327 | + # back onto the peer relation so all units have |
1328 | + # all *.password keys stored on their local relation |
1329 | + # for later retrieval. |
1330 | + peer_echo() |
1331 | + |
1332 | +""" |
1333 | + |
1334 | + |
1335 | +def peer_retrieve(key, relation_name='cluster'): |
1336 | + """ Retrieve a named key from peer relation relation_name """ |
1337 | + cluster_rels = relation_ids(relation_name) |
1338 | + if len(cluster_rels) > 0: |
1339 | + cluster_rid = cluster_rels[0] |
1340 | + return relation_get(attribute=key, rid=cluster_rid, |
1341 | + unit=local_unit()) |
1342 | + else: |
1343 | + raise ValueError('Unable to detect' |
1344 | + 'peer relation {}'.format(relation_name)) |
1345 | + |
1346 | + |
1347 | +def peer_store(key, value, relation_name='cluster'): |
1348 | + """ Store the key/value pair on the named peer relation relation_name """ |
1349 | + cluster_rels = relation_ids(relation_name) |
1350 | + if len(cluster_rels) > 0: |
1351 | + cluster_rid = cluster_rels[0] |
1352 | + relation_set(relation_id=cluster_rid, |
1353 | + relation_settings={key: value}) |
1354 | + else: |
1355 | + raise ValueError('Unable to detect ' |
1356 | + 'peer relation {}'.format(relation_name)) |
1357 | + |
1358 | + |
1359 | +def peer_echo(includes=None): |
1360 | + """Echo filtered attributes back onto the same relation for storage |
1361 | + |
1362 | + Note that this helper must only be called within a peer relation |
1363 | + changed hook |
1364 | + """ |
1365 | + rdata = relation_get() |
1366 | + echo_data = {} |
1367 | + if includes is None: |
1368 | + echo_data = rdata.copy() |
1369 | + for ex in ['private-address', 'public-address']: |
1370 | + if ex in echo_data: |
1371 | + echo_data.pop(ex) |
1372 | + else: |
1373 | + for attribute, value in rdata.iteritems(): |
1374 | + for include in includes: |
1375 | + if include in attribute: |
1376 | + echo_data[attribute] = value |
1377 | + if len(echo_data) > 0: |
1378 | + relation_set(relation_settings=echo_data) |
1379 | |
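`peer_echo`'s filtering step is the interesting part: with no `includes`, everything except the per-unit address keys is echoed back; otherwise only attributes containing one of the include substrings survive. Extracted as a pure function over a relation-data dict:

```python
def filter_echo_data(rdata, includes=None):
    """The filtering step of peer_echo(): drop the per-unit address
    keys when echoing everything, or keep only attributes that
    contain one of the include substrings."""
    if includes is None:
        return {k: v for k, v in rdata.items()
                if k not in ('private-address', 'public-address')}
    return {k: v for k, v in rdata.items()
            if any(inc in k for inc in includes)}

rdata = {'private-address': '10.0.0.7', 'admin.password': 's3cret', 'note': 'x'}
```

Substring matching on `includes` is what lets a single `'password'` entry echo every `*.password` key stored by the leader.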
1380 | === added directory 'hooks/charmhelpers/contrib/python' |
1381 | === added file 'hooks/charmhelpers/contrib/python/__init__.py' |
1382 | === added file 'hooks/charmhelpers/contrib/python/packages.py' |
1383 | --- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000 |
1384 | +++ hooks/charmhelpers/contrib/python/packages.py 2014-11-19 17:44:21 +0000 |
1385 | @@ -0,0 +1,76 @@ |
1386 | +#!/usr/bin/env python |
1387 | +# coding: utf-8 |
1388 | + |
1389 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
1390 | + |
1391 | +from charmhelpers.fetch import apt_install |
1392 | +from charmhelpers.core.hookenv import log |
1393 | + |
1394 | +try: |
1395 | + from pip import main as pip_execute |
1396 | +except ImportError: |
1397 | + apt_install('python-pip') |
1398 | + from pip import main as pip_execute |
1399 | + |
1400 | + |
1401 | +def parse_options(given, available): |
1402 | + """Given a set of options, check if available""" |
1403 | + for key, value in given.items(): |
1404 | + if key in available: |
1405 | + yield "--{0}={1}".format(key, value) |
1406 | + |
1407 | + |
1408 | +def pip_install_requirements(requirements, **options): |
1409 | + """Install a requirements file """ |
1410 | + command = ["install"] |
1411 | + |
1412 | + available_options = ('proxy', 'src', 'log', ) |
1413 | + for option in parse_options(options, available_options): |
1414 | + command.append(option) |
1415 | + |
1416 | + command.append("-r {0}".format(requirements)) |
1417 | + log("Installing from file: {} with options: {}".format(requirements, |
1418 | + command)) |
1419 | + pip_execute(command) |
1420 | + |
1421 | + |
1422 | +def pip_install(package, fatal=False, **options): |
1423 | + """Install a python package""" |
1424 | + command = ["install"] |
1425 | + |
1426 | + available_options = ('proxy', 'src', 'log', "index-url", ) |
1427 | + for option in parse_options(options, available_options): |
1428 | + command.append(option) |
1429 | + |
1430 | + if isinstance(package, list): |
1431 | + command.extend(package) |
1432 | + else: |
1433 | + command.append(package) |
1434 | + |
1435 | + log("Installing {} package with options: {}".format(package, |
1436 | + command)) |
1437 | + pip_execute(command) |
1438 | + |
1439 | + |
1440 | +def pip_uninstall(package, **options): |
1441 | + """Uninstall a python package""" |
1442 | + command = ["uninstall", "-q", "-y"] |
1443 | + |
1444 | + available_options = ('proxy', 'log', ) |
1445 | + for option in parse_options(options, available_options): |
1446 | + command.append(option) |
1447 | + |
1448 | + if isinstance(package, list): |
1449 | + command.extend(package) |
1450 | + else: |
1451 | + command.append(package) |
1452 | + |
1453 | + log("Uninstalling {} package with options: {}".format(package, |
1454 | + command)) |
1455 | + pip_execute(command) |
1456 | + |
1457 | + |
1458 | +def pip_list(): |
1459 | + """Returns the list of current python installed packages |
1460 | + """ |
1461 | + return pip_execute(["list"]) |
1462 | |
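All three pip helpers share `parse_options`, a generator that whitelists caller-supplied options against what each pip subcommand accepts and formats them as `--key=value` flags. Its behavior in isolation:

```python
def parse_options(given, available):
    """Whitelist-and-format step shared by the pip helpers above:
    only options in `available` become '--key=value' flags;
    anything else is silently dropped."""
    for key, value in given.items():
        if key in available:
            yield "--{0}={1}".format(key, value)

# The proxy URL here is an illustrative value, not a real endpoint.
flags = sorted(parse_options({'proxy': 'http://squid:3128', 'typo': 1},
                             ('proxy', 'src', 'log')))
```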
1463 | === added file 'hooks/charmhelpers/contrib/python/version.py' |
1464 | --- hooks/charmhelpers/contrib/python/version.py 1970-01-01 00:00:00 +0000 |
1465 | +++ hooks/charmhelpers/contrib/python/version.py 2014-11-19 17:44:21 +0000 |
1466 | @@ -0,0 +1,18 @@ |
1467 | +#!/usr/bin/env python |
1468 | +# coding: utf-8 |
1469 | + |
1470 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
1471 | + |
1472 | +import sys |
1473 | + |
1474 | + |
1475 | +def current_version(): |
1476 | + """Current system python version""" |
1477 | + return sys.version_info |
1478 | + |
1479 | + |
1480 | +def current_version_string(): |
1481 | + """Current system python version as string major.minor.micro""" |
1482 | + return "{0}.{1}.{2}".format(sys.version_info.major, |
1483 | + sys.version_info.minor, |
1484 | + sys.version_info.micro) |
1485 | |
1486 | === added file 'hooks/charmhelpers/contrib/ssl/service.py' |
1487 | --- hooks/charmhelpers/contrib/ssl/service.py 1970-01-01 00:00:00 +0000 |
1488 | +++ hooks/charmhelpers/contrib/ssl/service.py 2014-11-19 17:44:21 +0000 |
1489 | @@ -0,0 +1,267 @@ |
1490 | +import logging |
1491 | +import os |
1492 | +from os.path import join as path_join |
1493 | +from os.path import exists |
1494 | +import subprocess |
1495 | + |
1496 | + |
1497 | +log = logging.getLogger("service_ca") |
1498 | + |
1499 | +logging.basicConfig(level=logging.DEBUG) |
1500 | + |
1501 | +STD_CERT = "standard" |
1502 | + |
1503 | +# Mysql server is fairly picky about cert creation |
1504 | +# and types, spec its creation separately for now. |
1505 | +MYSQL_CERT = "mysql" |
1506 | + |
1507 | + |
1508 | +class ServiceCA(object): |
1509 | + |
1510 | + default_expiry = str(365 * 2) |
1511 | + default_ca_expiry = str(365 * 6) |
1512 | + |
1513 | + def __init__(self, name, ca_dir, cert_type=STD_CERT): |
1514 | + self.name = name |
1515 | + self.ca_dir = ca_dir |
1516 | + self.cert_type = cert_type |
1517 | + |
1518 | + ############### |
1519 | + # Hook Helper API |
1520 | + @staticmethod |
1521 | + def get_ca(type=STD_CERT): |
1522 | + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] |
1523 | + ca_path = os.path.join(os.environ['CHARM_DIR'], 'ca') |
1524 | + ca = ServiceCA(service_name, ca_path, type) |
1525 | + ca.init() |
1526 | + return ca |
1527 | + |
1528 | + @classmethod |
1529 | + def get_service_cert(cls, type=STD_CERT): |
1530 | + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] |
1531 | + ca = cls.get_ca() |
1532 | + crt, key = ca.get_or_create_cert(service_name) |
1533 | + return crt, key, ca.get_ca_bundle() |
1534 | + |
1535 | + ############### |
1536 | + |
1537 | + def init(self): |
1538 | + log.debug("initializing service ca") |
1539 | + if not exists(self.ca_dir): |
1540 | + self._init_ca_dir(self.ca_dir) |
1541 | + self._init_ca() |
1542 | + |
1543 | + @property |
1544 | + def ca_key(self): |
1545 | + return path_join(self.ca_dir, 'private', 'cacert.key') |
1546 | + |
1547 | + @property |
1548 | + def ca_cert(self): |
1549 | + return path_join(self.ca_dir, 'cacert.pem') |
1550 | + |
1551 | + @property |
1552 | + def ca_conf(self): |
1553 | + return path_join(self.ca_dir, 'ca.cnf') |
1554 | + |
1555 | + @property |
1556 | + def signing_conf(self): |
1557 | + return path_join(self.ca_dir, 'signing.cnf') |
1558 | + |
1559 | + def _init_ca_dir(self, ca_dir): |
1560 | + os.mkdir(ca_dir) |
1561 | + for i in ['certs', 'crl', 'newcerts', 'private']: |
1562 | + sd = path_join(ca_dir, i) |
1563 | + if not exists(sd): |
1564 | + os.mkdir(sd) |
1565 | + |
1566 | + if not exists(path_join(ca_dir, 'serial')): |
1567 | + with open(path_join(ca_dir, 'serial'), 'wb') as fh: |
1568 | + fh.write('02\n') |
1569 | + |
1570 | + if not exists(path_join(ca_dir, 'index.txt')): |
1571 | + with open(path_join(ca_dir, 'index.txt'), 'wb') as fh: |
1572 | + fh.write('') |
1573 | + |
1574 | + def _init_ca(self): |
1575 | + """Generate the root ca's cert and key. |
1576 | + """ |
1577 | + if not exists(path_join(self.ca_dir, 'ca.cnf')): |
1578 | + with open(path_join(self.ca_dir, 'ca.cnf'), 'wb') as fh: |
1579 | + fh.write( |
1580 | + CA_CONF_TEMPLATE % (self.get_conf_variables())) |
1581 | + |
1582 | + if not exists(path_join(self.ca_dir, 'signing.cnf')): |
1583 | + with open(path_join(self.ca_dir, 'signing.cnf'), 'wb') as fh: |
1584 | + fh.write( |
1585 | + SIGNING_CONF_TEMPLATE % (self.get_conf_variables())) |
1586 | + |
1587 | + if exists(self.ca_cert) or exists(self.ca_key): |
1588 | + raise RuntimeError("init() called when CA already exists") |
1589 | + cmd = ['openssl', 'req', '-config', self.ca_conf, |
1590 | + '-x509', '-nodes', '-newkey', 'rsa', |
1591 | + '-days', self.default_ca_expiry, |
1592 | + '-keyout', self.ca_key, '-out', self.ca_cert, |
1593 | + '-outform', 'PEM'] |
1594 | + output = subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
1595 | + log.debug("CA Init:\n %s", output) |
1596 | + |
1597 | + def get_conf_variables(self): |
1598 | + return dict( |
1599 | + org_name="juju", |
1600 | + org_unit_name="%s service" % self.name, |
1601 | + common_name=self.name, |
1602 | + ca_dir=self.ca_dir) |
1603 | + |
1604 | + def get_or_create_cert(self, common_name): |
1605 | + if common_name in self: |
1606 | + return self.get_certificate(common_name) |
1607 | + return self.create_certificate(common_name) |
1608 | + |
1609 | + def create_certificate(self, common_name): |
1610 | + if common_name in self: |
1611 | + return self.get_certificate(common_name) |
1612 | + key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) |
1613 | + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
1614 | + csr_p = path_join(self.ca_dir, "certs", "%s.csr" % common_name) |
1615 | + self._create_certificate(common_name, key_p, csr_p, crt_p) |
1616 | + return self.get_certificate(common_name) |
1617 | + |
1618 | + def get_certificate(self, common_name): |
1619 | + if common_name not in self: |
1620 | + raise ValueError("No certificate for %s" % common_name) |
1621 | + key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) |
1622 | + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
1623 | + with open(crt_p) as fh: |
1624 | + crt = fh.read() |
1625 | + with open(key_p) as fh: |
1626 | + key = fh.read() |
1627 | + return crt, key |
1628 | + |
1629 | + def __contains__(self, common_name): |
1630 | + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) |
1631 | + return exists(crt_p) |
1632 | + |
1633 | + def _create_certificate(self, common_name, key_p, csr_p, crt_p): |
1634 | + template_vars = self.get_conf_variables() |
1635 | + template_vars['common_name'] = common_name |
1636 | + subj = '/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s' % ( |
1637 | + template_vars) |
1638 | + |
1639 | + log.debug("CA Create Cert %s", common_name) |
1640 | + cmd = ['openssl', 'req', '-sha1', '-newkey', 'rsa:2048', |
1641 | + '-nodes', '-days', self.default_expiry, |
1642 | + '-keyout', key_p, '-out', csr_p, '-subj', subj] |
1643 | + subprocess.check_call(cmd) |
1644 | + cmd = ['openssl', 'rsa', '-in', key_p, '-out', key_p] |
1645 | + subprocess.check_call(cmd) |
1646 | + |
1647 | + log.debug("CA Sign Cert %s", common_name) |
1648 | + if self.cert_type == MYSQL_CERT: |
1649 | + cmd = ['openssl', 'x509', '-req', |
1650 | + '-in', csr_p, '-days', self.default_expiry, |
1651 | + '-CA', self.ca_cert, '-CAkey', self.ca_key, |
1652 | + '-set_serial', '01', '-out', crt_p] |
1653 | + else: |
1654 | + cmd = ['openssl', 'ca', '-config', self.signing_conf, |
1655 | + '-extensions', 'req_extensions', |
1656 | + '-days', self.default_expiry, '-notext', |
1657 | + '-in', csr_p, '-out', crt_p, '-subj', subj, '-batch'] |
1658 | + log.debug("running %s", " ".join(cmd)) |
1659 | + subprocess.check_call(cmd) |
1660 | + |
1661 | + def get_ca_bundle(self): |
1662 | + with open(self.ca_cert) as fh: |
1663 | + return fh.read() |
1664 | + |
1665 | + |
1666 | +CA_CONF_TEMPLATE = """ |
1667 | +[ ca ] |
1668 | +default_ca = CA_default |
1669 | + |
1670 | +[ CA_default ] |
1671 | +dir = %(ca_dir)s |
1672 | +policy = policy_match |
1673 | +database = $dir/index.txt |
1674 | +serial = $dir/serial |
1675 | +certs = $dir/certs |
1676 | +crl_dir = $dir/crl |
1677 | +new_certs_dir = $dir/newcerts |
1678 | +certificate = $dir/cacert.pem |
1679 | +private_key = $dir/private/cacert.key |
1680 | +RANDFILE = $dir/private/.rand |
1681 | +default_md = default |
1682 | + |
1683 | +[ req ] |
1684 | +default_bits = 1024 |
1685 | +default_md = sha1 |
1686 | + |
1687 | +prompt = no |
1688 | +distinguished_name = ca_distinguished_name |
1689 | + |
1690 | +x509_extensions = ca_extensions |
1691 | + |
1692 | +[ ca_distinguished_name ] |
1693 | +organizationName = %(org_name)s |
1694 | +organizationalUnitName = %(org_unit_name)s Certificate Authority |
1695 | + |
1696 | + |
1697 | +[ policy_match ] |
1698 | +countryName = optional |
1699 | +stateOrProvinceName = optional |
1700 | +organizationName = match |
1701 | +organizationalUnitName = optional |
1702 | +commonName = supplied |
1703 | + |
1704 | +[ ca_extensions ] |
1705 | +basicConstraints = critical,CA:true |
1706 | +subjectKeyIdentifier = hash |
1707 | +authorityKeyIdentifier = keyid:always, issuer |
1708 | +keyUsage = cRLSign, keyCertSign |
1709 | +""" |
1710 | + |
1711 | + |
1712 | +SIGNING_CONF_TEMPLATE = """ |
1713 | +[ ca ] |
1714 | +default_ca = CA_default |
1715 | + |
1716 | +[ CA_default ] |
1717 | +dir = %(ca_dir)s |
1718 | +policy = policy_match |
1719 | +database = $dir/index.txt |
1720 | +serial = $dir/serial |
1721 | +certs = $dir/certs |
1722 | +crl_dir = $dir/crl |
1723 | +new_certs_dir = $dir/newcerts |
1724 | +certificate = $dir/cacert.pem |
1725 | +private_key = $dir/private/cacert.key |
1726 | +RANDFILE = $dir/private/.rand |
1727 | +default_md = default |
1728 | + |
1729 | +[ req ] |
1730 | +default_bits = 1024 |
1731 | +default_md = sha1 |
1732 | + |
1733 | +prompt = no |
1734 | +distinguished_name = req_distinguished_name |
1735 | + |
1736 | +x509_extensions = req_extensions |
1737 | + |
1738 | +[ req_distinguished_name ] |
1739 | +organizationName = %(org_name)s |
1740 | +organizationalUnitName = %(org_unit_name)s machine resources |
1741 | +commonName = %(common_name)s |
1742 | + |
1743 | +[ policy_match ] |
1744 | +countryName = optional |
1745 | +stateOrProvinceName = optional |
1746 | +organizationName = match |
1747 | +organizationalUnitName = optional |
1748 | +commonName = supplied |
1749 | + |
1750 | +[ req_extensions ] |
1751 | +basicConstraints = CA:false |
1752 | +subjectKeyIdentifier = hash |
1753 | +authorityKeyIdentifier = keyid:always, issuer |
1754 | +keyUsage = digitalSignature, keyEncipherment, keyAgreement |
1755 | +extendedKeyUsage = serverAuth, clientAuth |
1756 | +""" |
1757 | |
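`ServiceCA` shells out to openssl, so it is hard to exercise outside a charm environment, but the subject string it signs with is plain string formatting. A sketch of that piece in isolation (the org/unit/common names below are illustrative, not charm API):

```python
def make_subject(org_name, org_unit_name, common_name):
    # Builds the -subj argument the same way _create_certificate() does.
    template_vars = {
        'org_name': org_name,
        'org_unit_name': org_unit_name,
        'common_name': common_name,
    }
    return ('/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s'
            % template_vars)


print(make_subject('juju', 'mysvc service', 'mysvc'))
```

In the charm itself a hook would call `ServiceCA.get_service_cert()`, which derives the service name from `JUJU_UNIT_NAME` and keeps the CA state under `$CHARM_DIR/ca`.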
1758 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
1759 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2013-11-26 17:12:54 +0000 |
1760 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-11-19 17:44:21 +0000 |
1761 | @@ -49,6 +49,9 @@ |
1762 | auth supported = {auth} |
1763 | keyring = {keyring} |
1764 | mon host = {mon_hosts} |
1765 | + log to syslog = {use_syslog} |
1766 | + err to syslog = {use_syslog} |
1767 | + clog to syslog = {use_syslog} |
1768 | """ |
1769 | |
1770 | |
1771 | @@ -194,7 +197,7 @@ |
1772 | return hosts |
1773 | |
1774 | |
1775 | -def configure(service, key, auth): |
1776 | +def configure(service, key, auth, use_syslog): |
1777 | ''' Perform basic configuration of Ceph ''' |
1778 | create_keyring(service, key) |
1779 | create_key_file(service, key) |
1780 | @@ -202,7 +205,8 @@ |
1781 | with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
1782 | ceph_conf.write(CEPH_CONF.format(auth=auth, |
1783 | keyring=_keyring_path(service), |
1784 | - mon_hosts=",".join(map(str, hosts)))) |
1785 | + mon_hosts=",".join(map(str, hosts)), |
1786 | + use_syslog=use_syslog)) |
1787 | modprobe('rbd') |
1788 | |
1789 | |
1790 | |
1791 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
1792 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2013-11-26 17:12:54 +0000 |
1793 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-11-19 17:44:21 +0000 |
1794 | @@ -2,7 +2,9 @@ |
1795 | from stat import S_ISBLK |
1796 | |
1797 | from subprocess import ( |
1798 | - check_call |
1799 | + check_call, |
1800 | + check_output, |
1801 | + call |
1802 | ) |
1803 | |
1804 | |
1805 | @@ -22,4 +24,12 @@ |
1806 | |
1807 | :param block_device: str: Full path of block device to clean. |
1808 | ''' |
1809 | - check_call(['sgdisk', '--zap-all', block_device]) |
1810 | + # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
1811 | + call(['sgdisk', '--zap-all', '--mbrtogpt', |
1812 | + '--clear', block_device]) |
1813 | + dev_end = check_output(['blockdev', '--getsz', block_device]) |
1814 | + gpt_end = int(dev_end.split()[0]) - 100 |
1815 | + check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
1816 | + 'bs=1M', 'count=1']) |
1817 | + check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
1818 | + 'bs=512', 'count=100', 'seek=%s'%(gpt_end)]) |
1819 | |
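The new `zap_disk` zeroes both GPT copies: the primary header in the first MiB and the backup header, which GPT keeps in the last sectors of the device. The offset arithmetic, separated from the `dd` calls (sector counts below are examples):

```python
def gpt_backup_seek(total_sectors, backup_sectors=100):
    # blockdev --getsz reports the device size in 512-byte sectors;
    # seeking to (size - 100) lets dd overwrite the backup GPT at the
    # end of the device, as the helper above does.
    return total_sectors - backup_sectors


# e.g. a 1 GiB device is 2097152 sectors of 512 bytes
print(gpt_backup_seek(2097152))
```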
1820 | === modified file 'hooks/charmhelpers/contrib/templating/contexts.py' |
1821 | --- hooks/charmhelpers/contrib/templating/contexts.py 2013-11-26 17:12:54 +0000 |
1822 | +++ hooks/charmhelpers/contrib/templating/contexts.py 2014-11-19 17:44:21 +0000 |
1823 | @@ -12,6 +12,43 @@ |
1824 | charm_dir = os.environ.get('CHARM_DIR', '') |
1825 | |
1826 | |
1827 | +def dict_keys_without_hyphens(a_dict): |
1828 | + """Return the a new dict with underscores instead of hyphens in keys.""" |
1829 | + return dict( |
1830 | + (key.replace('-', '_'), val) for key, val in a_dict.items()) |
1831 | + |
1832 | + |
1833 | +def update_relations(context, namespace_separator=':'): |
1834 | + """Update the context with the relation data.""" |
1835 | + # Add any relation data prefixed with the relation type. |
1836 | + relation_type = charmhelpers.core.hookenv.relation_type() |
1837 | + relations = [] |
1838 | + context['current_relation'] = {} |
1839 | + if relation_type is not None: |
1840 | + relation_data = charmhelpers.core.hookenv.relation_get() |
1841 | + context['current_relation'] = relation_data |
1842 | + # Deprecated: the following use of relation data as keys |
1843 | + # directly in the context will be removed. |
1844 | + relation_data = dict( |
1845 | + ("{relation_type}{namespace_separator}{key}".format( |
1846 | + relation_type=relation_type, |
1847 | + key=key, |
1848 | + namespace_separator=namespace_separator), val) |
1849 | + for key, val in relation_data.items()) |
1850 | + relation_data = dict_keys_without_hyphens(relation_data) |
1851 | + context.update(relation_data) |
1852 | + relations = charmhelpers.core.hookenv.relations_of_type(relation_type) |
1853 | + relations = [dict_keys_without_hyphens(rel) for rel in relations] |
1854 | + |
1855 | + if 'relations_deprecated' not in context: |
1856 | + context['relations_deprecated'] = {} |
1857 | + if relation_type is not None: |
1858 | + relation_type = relation_type.replace('-', '_') |
1859 | + context['relations_deprecated'][relation_type] = relations |
1860 | + |
1861 | + context['relations'] = charmhelpers.core.hookenv.relations() |
1862 | + |
1863 | + |
1864 | def juju_state_to_yaml(yaml_path, namespace_separator=':', |
1865 | allow_hyphens_in_keys=True): |
1866 | """Update the juju config and state in a yaml file. |
1867 | @@ -36,18 +73,10 @@ |
1868 | # file resources etc. |
1869 | config['charm_dir'] = charm_dir |
1870 | config['local_unit'] = charmhelpers.core.hookenv.local_unit() |
1871 | - |
1872 | - # Add any relation data prefixed with the relation type. |
1873 | - relation_type = charmhelpers.core.hookenv.relation_type() |
1874 | - if relation_type is not None: |
1875 | - relation_data = charmhelpers.core.hookenv.relation_get() |
1876 | - relation_data = dict( |
1877 | - ("{relation_type}{namespace_separator}{key}".format( |
1878 | - relation_type=relation_type.replace('-', '_'), |
1879 | - key=key, |
1880 | - namespace_separator=namespace_separator), val) |
1881 | - for key, val in relation_data.items()) |
1882 | - config.update(relation_data) |
1883 | + config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip() |
1884 | + config['unit_public_address'] = charmhelpers.core.hookenv.unit_get( |
1885 | + 'public-address' |
1886 | + ) |
1887 | |
1888 | # Don't use non-standard tags for unicode which will not |
1889 | # work when salt uses yaml.load_safe. |
1890 | @@ -66,8 +95,10 @@ |
1891 | existing_vars = {} |
1892 | |
1893 | if not allow_hyphens_in_keys: |
1894 | - config = dict( |
1895 | - (key.replace('-', '_'), val) for key, val in config.items()) |
1896 | + config = dict_keys_without_hyphens(config) |
1897 | existing_vars.update(config) |
1898 | + |
1899 | + update_relations(existing_vars, namespace_separator) |
1900 | + |
1901 | with open(yaml_path, "w+") as fp: |
1902 | - fp.write(yaml.dump(existing_vars)) |
1903 | + fp.write(yaml.dump(existing_vars, default_flow_style=False)) |
1904 | |
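The reworked context builder namespaces each relation key with the relation type, then swaps hyphens for underscores so templates can reference the values. A self-contained sketch of that transformation (relation name and data are hypothetical):

```python
def dict_keys_without_hyphens(a_dict):
    # Same helper as in the diff: hyphens are invalid in template
    # variable names, so keys become underscore-separated.
    return dict(
        (key.replace('-', '_'), val) for key, val in a_dict.items())


def namespace_relation(relation_type, relation_data, sep=':'):
    # Mirrors the (deprecated) prefixing done in update_relations().
    return dict(('%s%s%s' % (relation_type, sep, key), val)
                for key, val in relation_data.items())


data = {'private-address': '10.0.0.2'}
print(dict_keys_without_hyphens(namespace_relation('db', data)))
```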
1905 | === added directory 'hooks/charmhelpers/contrib/unison' |
1906 | === added file 'hooks/charmhelpers/contrib/unison/__init__.py' |
1907 | --- hooks/charmhelpers/contrib/unison/__init__.py 1970-01-01 00:00:00 +0000 |
1908 | +++ hooks/charmhelpers/contrib/unison/__init__.py 2014-11-19 17:44:21 +0000 |
1909 | @@ -0,0 +1,257 @@ |
1910 | +# Easy file synchronization among peer units using ssh + unison. |
1911 | +# |
1912 | +# From *both* peer relation -joined and -changed, add a call to |
1913 | +# ssh_authorized_peers() describing the peer relation and the desired |
1914 | +# user + group. After all peer relations have settled, all hosts should |
1915 | +# be able to connect to one another via key auth'd ssh as the specified user. |
1916 | +# |
1917 | +# Other hooks are then free to synchronize files and directories using |
1918 | +# sync_to_peers(). |
1919 | +# |
1920 | +# For a peer relation named 'cluster', for example: |
1921 | +# |
1922 | +# cluster-relation-joined: |
1923 | +# ... |
1924 | +# ssh_authorized_peers(peer_interface='cluster', |
1925 | +# user='juju_ssh', group='juju_ssh', |
1926 | +# ensure_user=True) |
1927 | +# ... |
1928 | +# |
1929 | +# cluster-relation-changed: |
1930 | +# ... |
1931 | +# ssh_authorized_peers(peer_interface='cluster', |
1932 | +# user='juju_ssh', group='juju_ssh', |
1933 | +# ensure_user=True) |
1934 | +# ... |
1935 | +# |
1936 | +# Hooks are now free to sync files as easily as: |
1937 | +# |
1938 | +# files = ['/etc/fstab', '/etc/apt.conf.d/'] |
1939 | +# sync_to_peers(peer_interface='cluster', |
1940 | +# user='juju_ssh', paths=[files]) |
1941 | +# |
1942 | +# It is assumed the charm itself has setup permissions on each unit |
1943 | +# such that 'juju_ssh' has read + write permissions. Also assumed |
1944 | +# that the calling charm takes care of leader delegation. |
1945 | +# |
1946 | +# Additionally, files can be synchronized only to a specific unit: |
1947 | +# sync_to_peer(slave_address, user='juju_ssh', |
1948 | +# paths=[files], verbose=False) |
1949 | + |
1950 | +import os |
1951 | +import pwd |
1952 | + |
1953 | +from copy import copy |
1954 | +from subprocess import check_call, check_output |
1955 | + |
1956 | +from charmhelpers.core.host import ( |
1957 | + adduser, |
1958 | + add_user_to_group, |
1959 | +) |
1960 | + |
1961 | +from charmhelpers.core.hookenv import ( |
1962 | + log, |
1963 | + hook_name, |
1964 | + relation_ids, |
1965 | + related_units, |
1966 | + relation_set, |
1967 | + relation_get, |
1968 | + unit_private_ip, |
1969 | + ERROR, |
1970 | +) |
1971 | + |
1972 | +BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false', |
1973 | + '-fastcheck=true', '-group=false', '-owner=false', |
1974 | + '-prefer=newer', '-times=true'] |
1975 | + |
1976 | + |
1977 | +def get_homedir(user): |
1978 | + try: |
1979 | + user = pwd.getpwnam(user) |
1980 | + return user.pw_dir |
1981 | + except KeyError: |
1982 | + log('Could not get homedir for user %s: user exists?' % user, ERROR) |
1983 | + raise Exception |
1984 | + |
1985 | + |
1986 | +def create_private_key(user, priv_key_path): |
1987 | + if not os.path.isfile(priv_key_path): |
1988 | + log('Generating new SSH key for user %s.' % user) |
1989 | + cmd = ['ssh-keygen', '-q', '-N', '', '-t', 'rsa', '-b', '2048', |
1990 | + '-f', priv_key_path] |
1991 | + check_call(cmd) |
1992 | + else: |
1993 | + log('SSH key already exists at %s.' % priv_key_path) |
1994 | + check_call(['chown', user, priv_key_path]) |
1995 | + check_call(['chmod', '0600', priv_key_path]) |
1996 | + |
1997 | + |
1998 | +def create_public_key(user, priv_key_path, pub_key_path): |
1999 | + if not os.path.isfile(pub_key_path): |
2000 | + log('Generating missing ssh public key @ %s.' % pub_key_path) |
2001 | + cmd = ['ssh-keygen', '-y', '-f', priv_key_path] |
2002 | + p = check_output(cmd).strip() |
2003 | + with open(pub_key_path, 'wb') as out: |
2004 | + out.write(p) |
2005 | + check_call(['chown', user, pub_key_path]) |
2006 | + |
2007 | + |
2008 | +def get_keypair(user): |
2009 | + home_dir = get_homedir(user) |
2010 | + ssh_dir = os.path.join(home_dir, '.ssh') |
2011 | + priv_key = os.path.join(ssh_dir, 'id_rsa') |
2012 | + pub_key = '%s.pub' % priv_key |
2013 | + |
2014 | + if not os.path.isdir(ssh_dir): |
2015 | + os.mkdir(ssh_dir) |
2016 | + check_call(['chown', '-R', user, ssh_dir]) |
2017 | + |
2018 | + create_private_key(user, priv_key) |
2019 | + create_public_key(user, priv_key, pub_key) |
2020 | + |
2021 | + with open(priv_key, 'r') as p: |
2022 | + _priv = p.read().strip() |
2023 | + |
2024 | + with open(pub_key, 'r') as p: |
2025 | + _pub = p.read().strip() |
2026 | + |
2027 | + return (_priv, _pub) |
2028 | + |
2029 | + |
2030 | +def write_authorized_keys(user, keys): |
2031 | + home_dir = get_homedir(user) |
2032 | + ssh_dir = os.path.join(home_dir, '.ssh') |
2033 | + auth_keys = os.path.join(ssh_dir, 'authorized_keys') |
2034 | + log('Syncing authorized_keys @ %s.' % auth_keys) |
2035 | + with open(auth_keys, 'wb') as out: |
2036 | + for k in keys: |
2037 | + out.write('%s\n' % k) |
2038 | + |
2039 | + |
2040 | +def write_known_hosts(user, hosts): |
2041 | + home_dir = get_homedir(user) |
2042 | + ssh_dir = os.path.join(home_dir, '.ssh') |
2043 | + known_hosts = os.path.join(ssh_dir, 'known_hosts') |
2044 | + khosts = [] |
2045 | + for host in hosts: |
2046 | + cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host] |
2047 | + remote_key = check_output(cmd).strip() |
2048 | + khosts.append(remote_key) |
2049 | + log('Syncing known_hosts @ %s.' % known_hosts) |
2050 | + with open(known_hosts, 'wb') as out: |
2051 | + for host in khosts: |
2052 | + out.write('%s\n' % host) |
2053 | + |
2054 | + |
2055 | +def ensure_user(user, group=None): |
2056 | + adduser(user) |
2057 | + if group: |
2058 | + add_user_to_group(user, group) |
2059 | + |
2060 | + |
2061 | +def ssh_authorized_peers(peer_interface, user, group=None, |
2062 | + ensure_local_user=False): |
2063 | + """ |
2064 | + Main setup function, should be called from both peer -changed and -joined |
2065 | + hooks with the same parameters. |
2066 | + """ |
2067 | + if ensure_local_user: |
2068 | + ensure_user(user, group) |
2069 | + priv_key, pub_key = get_keypair(user) |
2070 | + hook = hook_name() |
2071 | + if hook == '%s-relation-joined' % peer_interface: |
2072 | + relation_set(ssh_pub_key=pub_key) |
2073 | + elif hook == '%s-relation-changed' % peer_interface: |
2074 | + hosts = [] |
2075 | + keys = [] |
2076 | + |
2077 | + for r_id in relation_ids(peer_interface): |
2078 | + for unit in related_units(r_id): |
2079 | + ssh_pub_key = relation_get('ssh_pub_key', |
2080 | + rid=r_id, |
2081 | + unit=unit) |
2082 | + priv_addr = relation_get('private-address', |
2083 | + rid=r_id, |
2084 | + unit=unit) |
2085 | + if ssh_pub_key: |
2086 | + keys.append(ssh_pub_key) |
2087 | + hosts.append(priv_addr) |
2088 | + else: |
2089 | + log('ssh_authorized_peers(): ssh_pub_key ' |
2090 | + 'missing for unit %s, skipping.' % unit) |
2091 | + write_authorized_keys(user, keys) |
2092 | + write_known_hosts(user, hosts) |
2093 | + authed_hosts = ':'.join(hosts) |
2094 | + relation_set(ssh_authorized_hosts=authed_hosts) |
2095 | + |
2096 | + |
2097 | +def _run_as_user(user): |
2098 | + try: |
2099 | + user = pwd.getpwnam(user) |
2100 | + except KeyError: |
2101 | + log('Invalid user: %s' % user) |
2102 | + raise Exception |
2103 | + uid, gid = user.pw_uid, user.pw_gid |
2104 | + os.environ['HOME'] = user.pw_dir |
2105 | + |
2106 | + def _inner(): |
2107 | + os.setgid(gid) |
2108 | + os.setuid(uid) |
2109 | + return _inner |
2110 | + |
2111 | + |
2112 | +def run_as_user(user, cmd): |
2113 | + return check_output(cmd, preexec_fn=_run_as_user(user), cwd='/') |
2114 | + |
2115 | + |
2116 | +def collect_authed_hosts(peer_interface): |
2117 | + '''Iterate through the units on the peer interface and return |
2118 | + those that list the calling host in their authorized hosts.''' |
2119 | + hosts = [] |
2120 | + for r_id in (relation_ids(peer_interface) or []): |
2121 | + for unit in related_units(r_id): |
2122 | + private_addr = relation_get('private-address', |
2123 | + rid=r_id, unit=unit) |
2124 | + authed_hosts = relation_get('ssh_authorized_hosts', |
2125 | + rid=r_id, unit=unit) |
2126 | + |
2127 | + if not authed_hosts: |
2128 | + log('Peer %s has not authorized *any* hosts yet, skipping.' % unit) |
2129 | + continue |
2130 | + |
2131 | + if unit_private_ip() in authed_hosts.split(':'): |
2132 | + hosts.append(private_addr) |
2133 | + else: |
2134 | + log('Peer %s has not authorized *this* host yet, skipping.' % unit) |
2135 | + |
2136 | + return hosts |
2137 | + |
2138 | + |
2139 | +def sync_path_to_host(path, host, user, verbose=False): |
2140 | + cmd = copy(BASE_CMD) |
2141 | + if not verbose: |
2142 | + cmd.append('-silent') |
2143 | + |
2144 | + # remove trailing slash from directory paths; unison |
2145 | + # doesn't like them. |
2146 | + if path.endswith('/'): |
2147 | + path = path[:(len(path) - 1)] |
2148 | + |
2149 | + cmd = cmd + [path, 'ssh://%s@%s/%s' % (user, host, path)] |
2150 | + |
2151 | + try: |
2152 | + log('Syncing local path %s to %s@%s:%s' % (path, user, host, path)) |
2153 | + run_as_user(user, cmd) |
2154 | + except Exception: |
2155 | + log('Error syncing remote files') |
2156 | + |
2157 | + |
2158 | +def sync_to_peer(host, user, paths=[], verbose=False): |
2159 | + '''Sync paths to an specific host''' |
2160 | + [sync_path_to_host(p, host, user, verbose) for p in paths] |
2161 | + |
2162 | + |
2163 | +def sync_to_peers(peer_interface, user, paths=[], verbose=False): |
2164 | + '''Sync all hosts to an specific path''' |
2165 | + for host in collect_authed_hosts(peer_interface): |
2166 | + sync_to_peer(host, user, paths, verbose) |
2167 | |
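`sync_path_to_host` assembles a unison command from `BASE_CMD` plus the local path and an ssh root. The argv construction, reproduced without actually invoking unison (host and user below are placeholders):

```python
from copy import copy

BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false',
            '-fastcheck=true', '-group=false', '-owner=false',
            '-prefer=newer', '-times=true']


def build_sync_cmd(path, host, user, verbose=False):
    # Reproduces the argv assembled by sync_path_to_host().
    cmd = copy(BASE_CMD)
    if not verbose:
        cmd.append('-silent')
    # unison rejects trailing slashes on directory roots
    if path.endswith('/'):
        path = path[:-1]
    return cmd + [path, 'ssh://%s@%s/%s' % (user, host, path)]


print(build_sync_cmd('/etc/fstab/', '10.0.0.2', 'juju_ssh')[-1])
```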
2168 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
2169 | --- hooks/charmhelpers/core/hookenv.py 2013-11-26 17:12:54 +0000 |
2170 | +++ hooks/charmhelpers/core/hookenv.py 2014-11-19 17:44:21 +0000 |
2171 | @@ -8,6 +8,7 @@ |
2172 | import json |
2173 | import yaml |
2174 | import subprocess |
2175 | +import sys |
2176 | import UserDict |
2177 | from subprocess import CalledProcessError |
2178 | |
2179 | @@ -149,6 +150,11 @@ |
2180 | return local_unit().split('/')[0] |
2181 | |
2182 | |
2183 | +def hook_name(): |
2184 | + """The name of the currently executing hook""" |
2185 | + return os.path.basename(sys.argv[0]) |
2186 | + |
2187 | + |
2188 | @cached |
2189 | def config(scope=None): |
2190 | """Juju charm configuration""" |
2191 | |
2192 | === modified file 'hooks/charmhelpers/core/host.py' |
2193 | --- hooks/charmhelpers/core/host.py 2013-11-26 17:12:54 +0000 |
2194 | +++ hooks/charmhelpers/core/host.py 2014-11-19 17:44:21 +0000 |
2195 | @@ -194,7 +194,7 @@ |
2196 | return None |
2197 | |
2198 | |
2199 | -def restart_on_change(restart_map): |
2200 | +def restart_on_change(restart_map, stopstart=False): |
2201 | """Restart services based on configuration files changing |
2202 | |
2203 | This function is used a decorator, for example |
2204 | @@ -219,8 +219,14 @@ |
2205 | for path in restart_map: |
2206 | if checksums[path] != file_hash(path): |
2207 | restarts += restart_map[path] |
2208 | - for service_name in list(OrderedDict.fromkeys(restarts)): |
2209 | - service('restart', service_name) |
2210 | + services_list = list(OrderedDict.fromkeys(restarts)) |
2211 | + if not stopstart: |
2212 | + for service_name in services_list: |
2213 | + service('restart', service_name) |
2214 | + else: |
2215 | + for action in ['stop', 'start']: |
2216 | + for service_name in services_list: |
2217 | + service(action, service_name) |
2218 | return wrapped_f |
2219 | return wrap |
2220 | |
2221 | @@ -279,3 +285,13 @@ |
2222 | if 'mtu' in words: |
2223 | mtu = words[words.index("mtu") + 1] |
2224 | return mtu |
2225 | + |
2226 | + |
2227 | +def get_nic_hwaddr(nic): |
2228 | + cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
2229 | + ip_output = subprocess.check_output(cmd) |
2230 | + hwaddr = "" |
2231 | + words = ip_output.split() |
2232 | + if 'link/ether' in words: |
2233 | + hwaddr = words[words.index('link/ether') + 1] |
2234 | + return hwaddr |
2235 | |
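With the new `stopstart` flag, `restart_on_change` either restarts each changed service once or does a full stop/start cycle across all of them. The planning logic, pulled out so it can be checked without touching real services (service names are illustrative):

```python
from collections import OrderedDict


def plan_service_actions(changed_paths, restart_map, stopstart=False):
    # Same dedup-and-order logic as the decorator above, returning the
    # (action, service) pairs it would execute.
    restarts = []
    for path in restart_map:
        if path in changed_paths:
            restarts += restart_map[path]
    services = list(OrderedDict.fromkeys(restarts))
    if not stopstart:
        return [('restart', s) for s in services]
    return [(action, s) for action in ('stop', 'start') for s in services]


print(plan_service_actions(['/etc/foo.conf'],
                           {'/etc/foo.conf': ['gunicorn', 'nginx']},
                           stopstart=True))
```

Note that with `stopstart=True` every service is stopped before any is started again, which matters for services that contend for the same port or socket.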
2236 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
2237 | --- hooks/charmhelpers/fetch/__init__.py 2013-11-26 17:12:54 +0000 |
2238 | +++ hooks/charmhelpers/fetch/__init__.py 2014-11-19 17:44:21 +0000 |
2239 | @@ -44,8 +44,16 @@ |
2240 | 'precise-havana/updates': 'precise-updates/havana', |
2241 | 'precise-updates/havana': 'precise-updates/havana', |
2242 | 'havana/proposed': 'precise-proposed/havana', |
2243 | - 'precies-havana/proposed': 'precise-proposed/havana', |
2244 | + 'precise-havana/proposed': 'precise-proposed/havana', |
2245 | 'precise-proposed/havana': 'precise-proposed/havana', |
2246 | + # Icehouse |
2247 | + 'icehouse': 'precise-updates/icehouse', |
2248 | + 'precise-icehouse': 'precise-updates/icehouse', |
2249 | + 'precise-icehouse/updates': 'precise-updates/icehouse', |
2250 | + 'precise-updates/icehouse': 'precise-updates/icehouse', |
2251 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
2252 | + 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
2253 | + 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
2254 | } |
2255 | |
2256 | |
2257 | @@ -89,6 +97,29 @@ |
2258 | subprocess.call(cmd, env=env) |
2259 | |
2260 | |
2261 | +def apt_upgrade(options=None, fatal=False, dist=False): |
2262 | + """Upgrade all packages""" |
2263 | + if options is None: |
2264 | + options = ['--option=Dpkg::Options::=--force-confold'] |
2265 | + |
2266 | + cmd = ['apt-get', '--assume-yes'] |
2267 | + cmd.extend(options) |
2268 | + if dist: |
2269 | + cmd.append('dist-upgrade') |
2270 | + else: |
2271 | + cmd.append('upgrade') |
2272 | + log("Upgrading with options: {}".format(options)) |
2273 | + |
2274 | + env = os.environ.copy() |
2275 | + if 'DEBIAN_FRONTEND' not in env: |
2276 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
2277 | + |
2278 | + if fatal: |
2279 | + subprocess.check_call(cmd, env=env) |
2280 | + else: |
2281 | + subprocess.call(cmd, env=env) |
2282 | + |
2283 | + |
2284 | def apt_update(fatal=False): |
2285 | """Update local apt cache""" |
2286 | cmd = ['apt-get', 'update'] |
2287 | @@ -127,8 +158,12 @@ |
2288 | |
2289 | |
2290 | def add_source(source, key=None): |
2291 | + if source is None: |
2292 | + log('Source is not present. Skipping') |
2293 | + return |
2294 | + |
2295 | if (source.startswith('ppa:') or |
2296 | - source.startswith('http:') or |
2297 | + source.startswith('http') or |
2298 | source.startswith('deb ') or |
2299 | source.startswith('cloud-archive:')): |
2300 | subprocess.check_call(['add-apt-repository', '--yes', source]) |
2301 | @@ -148,7 +183,9 @@ |
2302 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
2303 | apt.write(PROPOSED_POCKET.format(release)) |
2304 | if key: |
2305 | - subprocess.check_call(['apt-key', 'import', key]) |
2306 | + subprocess.check_call(['apt-key', 'adv', '--keyserver', |
2307 | + 'hkp://keyserver.ubuntu.com:80', '--recv', |
2308 | + key]) |
2309 | |
2310 | |
2311 | class SourceConfigError(Exception): |
2312 | |
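`add_source` now guards against a `None` source and accepts `https://` URLs (the old prefix check was literally `http:`). The dispatch predicate on its own (sources below are examples):

```python
def handled_by_add_apt_repository(source):
    # Mirrors the first branch of add_source(); startswith('http') now
    # matches both http:// and https:// sources.
    if source is None:
        return False
    return (source.startswith('ppa:') or
            source.startswith('http') or
            source.startswith('deb ') or
            source.startswith('cloud-archive:'))


print(handled_by_add_apt_repository('https://example.com/ubuntu'))
```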
2313 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' |
2314 | --- hooks/charmhelpers/fetch/archiveurl.py 2013-11-18 02:24:35 +0000 |
2315 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-11-19 17:44:21 +0000 |
2316 | @@ -1,5 +1,7 @@ |
2317 | import os |
2318 | import urllib2 |
2319 | +import urlparse |
2320 | + |
2321 | from charmhelpers.fetch import ( |
2322 | BaseFetchHandler, |
2323 | UnhandledSource |
2324 | @@ -24,6 +26,19 @@ |
2325 | def download(self, source, dest): |
2326 | # propogate all exceptions |
2327 | # URLError, OSError, etc |
2328 | + proto, netloc, path, params, query, fragment = urlparse.urlparse(source) |
2329 | + if proto in ('http', 'https'): |
2330 | + auth, barehost = urllib2.splituser(netloc) |
2331 | + if auth is not None: |
2332 | + source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) |
2333 | + username, password = urllib2.splitpasswd(auth) |
2334 | + passman = urllib2.HTTPPasswordMgrWithDefaultRealm() |
2335 | + # Realm is set to None in add_password to force the username and password |
2336 | + # to be used whatever the realm |
2337 | + passman.add_password(None, source, username, password) |
2338 | + authhandler = urllib2.HTTPBasicAuthHandler(passman) |
2339 | + opener = urllib2.build_opener(authhandler) |
2340 | + urllib2.install_opener(opener) |
2341 | response = urllib2.urlopen(source) |
2342 | try: |
2343 | with open(dest, 'w') as dest_file: |
2344 | |
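The download change strips `user:password@` credentials out of the URL and installs a basic-auth handler instead, since many servers reject credentials embedded in the URL itself. The diff uses Python 2's `urllib2.splituser`; a Python 3 sketch of the same splitting logic:

```python
from urllib.parse import urlparse, urlunparse


def split_credentials(source):
    # Returns (bare_url, username, password), with any credentials
    # removed from the netloc as the download() change does.
    parts = urlparse(source)
    if parts.username is None:
        return source, None, None
    barehost = parts.hostname
    if parts.port:
        barehost = '%s:%d' % (barehost, parts.port)
    bare = urlunparse((parts.scheme, barehost, parts.path,
                       parts.params, parts.query, parts.fragment))
    return bare, parts.username, parts.password


print(split_credentials('https://user:secret@example.com/app.tgz'))
```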
2345 | === modified file 'hooks/hooks.py' |
2346 | --- hooks/hooks.py 2014-05-02 16:00:17 +0000 |
2347 | +++ hooks/hooks.py 2014-11-19 17:44:21 +0000 |
2348 | @@ -1,8 +1,6 @@ |
2349 | #!/usr/bin/env python |
2350 | # vim: et ai ts=4 sw=4: |
2351 | |
2352 | -import base64 |
2353 | -import json |
2354 | import os |
2355 | import re |
2356 | import socket |
2357 | @@ -36,19 +34,12 @@ |
2358 | open_port, |
2359 | close_port, |
2360 | unit_private_ip, |
2361 | - CRITICAL, |
2362 | ERROR, |
2363 | - WARNING, |
2364 | - INFO, |
2365 | DEBUG, |
2366 | ) |
2367 | |
2368 | -import charmhelpers.contrib.ansible |
2369 | - |
2370 | hooks = Hooks() |
2371 | |
2372 | -ansible_vars_path = '/etc/ansible/host_vars/localhost' |
2373 | - |
2374 | CHARM_PACKAGES = ["python-pip", "python-jinja2", "mercurial", "git-core", |
2375 | "subversion", "bzr", "gettext"] |
2376 | |
2377 | @@ -72,9 +63,10 @@ |
2378 | else: |
2379 | raise |
2380 | |
2381 | -#------------------------------------------------------------------------------ |
2382 | + |
2383 | +# ------------------------------------------------------------------------------ |
2384 | # pip_install( package ): Installs a python package |
2385 | -#------------------------------------------------------------------------------ |
2386 | +# ------------------------------------------------------------------------------ |
2387 | def pip_install(packages=None, upgrade=False): |
2388 | # Build in /tmp or Juju's internal git will be confused |
2389 | cmd_line = ['pip', 'install', '-b', '/tmp/', '--src', '/usr/src/'] |
2390 | @@ -95,10 +87,25 @@ |
2391 | return(subprocess.call(cmd_line)) |
2392 | |
2393 | |
2394 | +# ------------------------------------------------------------------------------ |
2395 | +# pip_install_req( path ): Installs a requirements file |
2396 | +# ------------------------------------------------------------------------------ |
2397 | +def pip_install_req(path_or_url=None, upgrade=False): |
2398 | + # Build in /tmp or Juju's internal git will be confused |
2399 | + cmd_line = ['pip', 'install', '-b', '/tmp/', '--src', '/usr/src/'] |
2400 | + if path_or_url is None: |
2401 | + return(False) |
2402 | + if upgrade: |
2403 | + cmd_line.append('--upgrade') |
2404 | + cmd_line.append('-r') |
2405 | + cmd_line.append(path_or_url) |
2406 | + cmd_line.append('--use-mirrors') |
2407 | + return(subprocess.call(cmd_line)) |
2408 | + |
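`pip_install_req` shells out to pip rather than importing it, so the interesting part is the command line it assembles. A hedged reconstruction of that logic as a testable helper (note that `--use-mirrors` was removed in later pip releases, so this matches the precise-era pip the charm targets):

```python
def pip_req_cmd(path_or_url, upgrade=False):
    """Build the pip command line pip_install_req() above would run."""
    # Build in /tmp so Juju's internal git is not confused, as noted above.
    cmd = ['pip', 'install', '-b', '/tmp/', '--src', '/usr/src/']
    if upgrade:
        cmd.append('--upgrade')
    cmd += ['-r', path_or_url, '--use-mirrors']
    return cmd
```

Keeping command construction separate from `subprocess.call` is what makes hooks like this unit-testable, which is the pattern the new `hooks/tests/` files rely on.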
2409 | + |
2410 | # |
2411 | # Utils |
2412 | # |
2413 | - |
2414 | def install_or_append(contents, dest, owner="root", group="root", mode=0600): |
2415 | if os.path.exists(dest): |
2416 | uid = getpwnam(owner)[2] |
2417 | @@ -117,6 +124,7 @@ |
2418 | return False |
2419 | return True |
2420 | |
2421 | + |
2422 | def sanitize(s): |
2423 | s = s.replace(':', '_') |
2424 | s = s.replace('-', '_') |
2425 | @@ -131,7 +139,7 @@ |
2426 | from jinja2 import Environment, FileSystemLoader |
2427 | template_env = Environment( |
2428 | loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], |
2429 | - 'templates'))) |
2430 | + 'templates'))) |
2431 | |
2432 | template = \ |
2433 | template_env.get_template(template_name).render(template_vars) |
2434 | @@ -159,7 +167,7 @@ |
2435 | return apt_install(distro, options=['--force-yes']) |
2436 | elif rel[:3] == "deb": |
2437 | l = len(rel.split('|')) |
2438 | - if l == 2: |
2439 | + if l == 2: |
2440 | src, key = rel.split('|') |
2441 | log("Importing PPA key from keyserver for %s" % src) |
2442 | _import_key(key) |
2443 | @@ -177,6 +185,7 @@ |
2444 | else: |
2445 | return pip_install(rel) |
2446 | |
2447 | + |
2448 | # |
2449 | # from: |
2450 | # http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python |
2451 | @@ -193,6 +202,7 @@ |
2452 | |
2453 | return False |
2454 | |
2455 | + |
2456 | def find_django_admin_cmd(): |
2457 | for cmd in ['django-admin.py', 'django-admin']: |
2458 | django_admin_cmd = which(cmd) |
2459 | @@ -202,13 +212,14 @@ |
2460 | |
2461 | log("No django-admin executable found.", ERROR) |
2462 | |
2463 | + |
2464 | def append_template(template_name, template_vars, path, try_append=False): |
2465 | |
2466 | # --- exported service configuration file |
2467 | from jinja2 import Environment, FileSystemLoader |
2468 | template_env = Environment( |
2469 | loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], |
2470 | - 'templates'))) |
2471 | + 'templates'))) |
2472 | |
2473 | template = \ |
2474 | template_env.get_template(template_name).render(template_vars) |
2475 | @@ -221,7 +232,7 @@ |
2476 | else: |
2477 | append = True |
2478 | |
2479 | - if append == True: |
2480 | + if append: |
2481 | with open(path, 'a') as inject_file: |
2482 | inject_file.write(INJECTED_WARNING) |
2483 | inject_file.write(str(template)) |
2484 | @@ -250,6 +261,10 @@ |
2485 | |
2486 | break |
2487 | |
2488 | + if extra_pip_pkgs: |
2489 | + for package in extra_pip_pkgs.split(','): |
2490 | + pip_install(package, upgrade=True) |
2491 | + |
2492 | configure_and_install(django_version) |
2493 | |
2494 | if django_south: |
2495 | @@ -267,28 +282,40 @@ |
2496 | run('%s %s %s' % (cmd, sanitized_unit_name, install_root), exit_on_error=False) |
2497 | except subprocess.CalledProcessError: |
2498 | run('%s %s' % (cmd, sanitized_unit_name), cwd=install_root) |
2499 | + elif vcs == 'hg' or vcs == 'mercurial': |
2500 | + run('hg clone %s %s' % (repos_url, vcs_clone_dir)) |
2501 | + elif vcs == 'git' or vcs == 'git-core': |
2502 | + if repos_branch: |
2503 | + run('git clone %s -b %s %s' % (repos_url, repos_branch, vcs_clone_dir)) |
2504 | + else: |
2505 | + run('git clone %s %s' % (repos_url, vcs_clone_dir)) |
2506 | + elif vcs == 'bzr' or vcs == 'bazaar': |
2507 | + run('bzr branch %s %s' % (repos_url, vcs_clone_dir)) |
2508 | + elif vcs == 'svn' or vcs == 'subversion': |
2509 | + run('svn co %s %s' % (repos_url, vcs_clone_dir)) |
2510 | else: |
2511 | - charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True) |
2512 | - charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
2513 | + log("Unknown version control", ERROR) |
2514 | + sys.exit(1) |
2515 | |
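The if/elif chain above (which replaces the removed Ansible playbook) dispatches on the `vcs` config value and accepts two aliases per tool. An equivalent table-driven sketch (a hypothetical helper, not part of the charm; it raises instead of calling `sys.exit` so it can be tested):

```python
CLONE_COMMANDS = {
    'hg': 'hg clone %(url)s %(dest)s',
    'mercurial': 'hg clone %(url)s %(dest)s',
    'git': 'git clone %(url)s %(dest)s',
    'git-core': 'git clone %(url)s %(dest)s',
    'bzr': 'bzr branch %(url)s %(dest)s',
    'bazaar': 'bzr branch %(url)s %(dest)s',
    'svn': 'svn co %(url)s %(dest)s',
    'subversion': 'svn co %(url)s %(dest)s',
}

def clone_command(vcs, url, dest, branch=None):
    """Return the shell command the hook above would run for this VCS."""
    if vcs in ('git', 'git-core') and branch:
        # Only the git path honours repos_branch in the hunk above.
        return 'git clone %s -b %s %s' % (url, branch, dest)
    if vcs not in CLONE_COMMANDS:
        raise ValueError('Unknown version control: %s' % vcs)
    return CLONE_COMMANDS[vcs] % {'url': url, 'dest': dest}
```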
2516 | run('chown -R %s:%s %s' % (wsgi_user, wsgi_group, vcs_clone_dir)) |
2517 | |
2518 | mkdir(settings_dir_path, owner=wsgi_user, group=wsgi_group, perms=0755) |
2519 | mkdir(urls_dir_path, owner=wsgi_user, group=wsgi_group, perms=0755) |
2520 | |
2521 | - #FIXME: Upgrades/pulls will mess those files |
2522 | + # FIXME: Upgrades/pulls will mess those files |
2523 | |
2524 | append_template('conf_injection.tmpl', {'dir': settings_dir_name}, settings_py_path) |
2525 | append_template('urls_injection.tmpl', {'dir': urls_dir_name}, urls_py_path) |
2526 | |
2527 | + if requirements_pip_files: |
2528 | + for req_file in requirements_pip_files.split(','): |
2529 | + pip_install_req(os.path.join(working_dir, req_file)) |
2530 | + |
2531 | wsgi_py_path = os.path.join(working_dir, 'wsgi.py') |
2532 | if not os.path.exists(wsgi_py_path): |
2533 | - process_template('wsgi.py.tmpl', {'project_name': sanitized_unit_name, \ |
2534 | - 'django_settings': settings_module}, \ |
2535 | + process_template('wsgi.py.tmpl', {'project_name': sanitized_unit_name, |
2536 | + 'django_settings': settings_module}, |
2537 | wsgi_py_path) |
2538 | - if unit_config: |
2539 | - with open(os.path.join(vcs_clone_dir, 'unit_config'), 'w') as f: |
2540 | - f.write(base64.b64decode(unit_config)) |
2541 | |
2542 | |
2543 | @hooks.hook() |
2544 | @@ -349,7 +376,15 @@ |
2545 | |
2546 | break |
2547 | |
2548 | - charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
2549 | + # FIXME: pull new code ? |
2550 | + |
2551 | + if extra_pip_pkgs: |
2552 | + for package in extra_pip_pkgs.split(','): |
2553 | + pip_install(package, upgrade=True) |
2554 | + |
2555 | + if requirements_pip_files: |
2556 | + for req_file in requirements_pip_files.split(','): |
2557 | + pip_install_req(os.path.join(working_dir, req_file), upgrade=True) |
2558 | |
2559 | run('chown -R %s:%s %s' % (wsgi_user, wsgi_group, vcs_clone_dir)) |
2560 | |
2561 | @@ -423,9 +458,10 @@ |
2562 | 'db_host': relation_get("host"), |
2563 | } |
2564 | |
2565 | - process_template('pgsql_engine.tmpl', templ_vars, settings_database_path % {'engine_name': 'pgsql'}) |
2566 | + process_template('pgsql_engine.tmpl', templ_vars, |
2567 | + settings_database_path % {'engine_name': 'pgsql'}) |
2568 | |
2569 | - run("%s syncdb --noinput --settings=%s" % \ |
2570 | + run("%s syncdb --noinput --settings=%s" % |
2571 | (django_admin_cmd, settings_module), |
2572 | cwd=working_dir) |
2573 | |
2574 | @@ -433,7 +469,7 @@ |
2575 | south_config_file = os.path.join(settings_dir_path, '50-south.py') |
2576 | process_template('south.tmpl', {}, south_config_file) |
2577 | |
2578 | - run("%s migrate --settings=%s" % \ |
2579 | + run("%s migrate --settings=%s" % |
2580 | (django_admin_cmd, settings_module), |
2581 | cwd=working_dir) |
2582 | |
2583 | @@ -453,7 +489,9 @@ |
2584 | relation_set(relation_settings={'wsgi_timestamp': time.time()}, relation_id=relid) |
2585 | |
2586 | |
2587 | -@hooks.hook('mysql-relation-joined', 'mysql-relation-changed') |
2588 | +@hooks.hook('mysql-relation-joined', 'mysql-relation-changed', |
2589 | + 'mysql-root-relation-joined', 'mysql-root-relation-changed', |
2590 | + 'mysql-shared-relation-joined', 'mysql-shared-relation-changed') |
2591 | def mysql_relation_joined_changed(): |
2592 | os.environ['DJANGO_SETTINGS_MODULE'] = settings_module |
2593 | django_admin_cmd = find_django_admin_cmd() |
2594 | @@ -475,7 +513,7 @@ |
2595 | |
2596 | process_template('mysql_engine.tmpl', templ_vars, settings_database_path % {'engine_name': 'mysql'}) |
2597 | |
2598 | - run("%s syncdb --noinput --settings=%s" % \ |
2599 | + run("%s syncdb --noinput --settings=%s" % |
2600 | (django_admin_cmd, settings_module), |
2601 | cwd=working_dir) |
2602 | |
2603 | @@ -483,7 +521,7 @@ |
2604 | south_config_file = os.path.join(settings_dir_path, '50-south.py') |
2605 | process_template('south.tmpl', {}, south_config_file) |
2606 | |
2607 | - run("%s migrate --settings=%s" % \ |
2608 | + run("%s migrate --settings=%s" % |
2609 | (django_admin_cmd, settings_module), |
2610 | cwd=working_dir) |
2611 | |
2612 | @@ -494,7 +532,8 @@ |
2613 | relation_set(relation_settings={'wsgi_timestamp': time.time()}, relation_id=relid) |
2614 | |
2615 | |
2616 | -@hooks.hook('mysql-relation-broken') |
2617 | +@hooks.hook('mysql-relation-broken', 'mysql-root-relation-broken', |
2618 | + 'mysql-shared-relation-broken') |
2619 | def mysql_relation_broken(): |
2620 | run('rm %s' % settings_database_path % {'engine_name': 'mysql'}) |
2621 | |
2622 | @@ -587,8 +626,8 @@ |
2623 | |
2624 | @hooks.hook('amqp-relation-joined') |
2625 | def amqp_relation_joined(): |
2626 | - relation_set(relation_settings= |
2627 | - {'username': sanitized_unit_name, 'vhost': sanitized_unit_name}) |
2628 | + relation_set(relation_settings={ |
2629 | + 'username': sanitized_unit_name, 'vhost': sanitized_unit_name}) |
2630 | |
2631 | |
2632 | @hooks.hook('amqp-relation-changed') |
2633 | @@ -614,7 +653,7 @@ |
2634 | |
2635 | run('chown -R %s:%s %s' % (wsgi_user, wsgi_group, vcs_clone_dir)) |
2636 | |
2637 | - run("%s syncdb --noinput --settings=%s" % \ |
2638 | + run("%s syncdb --noinput --settings=%s" % |
2639 | (django_admin_cmd, settings_module), |
2640 | cwd=working_dir) |
2641 | |
2642 | @@ -702,7 +741,6 @@ |
2643 | django_settings = config_data['django_settings'] |
2644 | settings_dir_name = config_data['settings_dir_name'] |
2645 | urls_dir_name = config_data['urls_dir_name'] |
2646 | -unit_config = config_data.get('unit-config') |
2647 | django_south = config_data['django_south'] |
2648 | django_south_version = config_data['django_south_version'] |
2649 | |
2650 | @@ -721,16 +759,19 @@ |
2651 | python_path = os.path.join(working_dir, '../') |
2652 | |
2653 | if django_settings: |
2654 | - settings_module = django_settings #andy hack |
2655 | + settings_module = django_settings # andy hack |
2656 | else: |
2657 | - settings_module = '.'.join([sanitized_unit_name, 'settings']) |
2658 | + if application_path: |
2659 | + settings_module = '.'.join([os.path.basename(working_dir), 'settings']) |
2660 | + else: |
2661 | + settings_module = '.'.join([sanitized_unit_name, 'settings']) |
2662 | |
2663 | django_run_dir = os.path.join(working_dir, "run/") |
2664 | django_logs_dir = os.path.join(working_dir, "logs/") |
2665 | |
2666 | settings_injection_path = config_data['settings_injection_path'] |
2667 | settings_py_path = os.path.join(working_dir, settings_injection_path) |
2668 | -urls_injection_path = config_data['urls_injection_path'] |
2669 | +urls_injection_path = config_data['urls_injection_path'] |
2670 | urls_py_path = os.path.join(working_dir, urls_injection_path) |
2671 | settings_dir_path = os.path.join(working_dir, os.path.dirname(settings_injection_path), settings_dir_name) |
2672 | urls_dir_path = os.path.join(working_dir, os.path.dirname(urls_injection_path), urls_dir_name) |
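The module-level configuration above picks the Django settings module from three sources, with explicit config winning. The precedence added in this branch can be sketched as (the helper name is illustrative; the hook computes this inline):

```python
import os

def derive_settings_module(django_settings, application_path,
                           working_dir, sanitized_unit_name):
    if django_settings:
        return django_settings  # explicit config wins (the "andy hack")
    if application_path:
        # Project lives in a subdirectory of the checkout, so the package
        # name comes from the working dir, not the unit name.
        return '.'.join([os.path.basename(working_dir), 'settings'])
    return '.'.join([sanitized_unit_name, 'settings'])
```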
2673 | |
2674 | === added symlink 'hooks/mysql-root-relation-broken' |
2675 | === target is u'hooks.py' |
2676 | === added symlink 'hooks/mysql-root-relation-changed' |
2677 | === target is u'hooks.py' |
2678 | === added symlink 'hooks/mysql-root-relation-joined' |
2679 | === target is u'hooks.py' |
2680 | === added symlink 'hooks/mysql-shared-relation-broken' |
2681 | === target is u'hooks.py' |
2682 | === added symlink 'hooks/mysql-shared-relation-changed' |
2683 | === target is u'hooks.py' |
2684 | === added symlink 'hooks/mysql-shared-relation-joined' |
2685 | === target is u'hooks.py' |
2686 | === added directory 'hooks/tests' |
2687 | === added file 'hooks/tests/test_hooks.py' |
2688 | --- hooks/tests/test_hooks.py 1970-01-01 00:00:00 +0000 |
2689 | +++ hooks/tests/test_hooks.py 2014-11-19 17:44:21 +0000 |
2690 | @@ -0,0 +1,181 @@ |
2691 | +from unittest import TestCase |
2692 | +from mock import patch |
2693 | + |
2694 | +import yaml |
2695 | + |
2696 | +import hooks |
2697 | + |
2698 | + |
2699 | +def load_config_defaults(): |
2700 | + with open("config.yaml") as conf: |
2701 | + config_schema = yaml.safe_load(conf) |
2702 | + defaults = {} |
2703 | + for key, value in config_schema['options'].items(): |
2704 | + defaults[key] = value['default'] |
2705 | + return defaults |
2706 | + |
2707 | +DEFAULTS = load_config_defaults() |
2708 | + |
2709 | + |
2710 | +class HookTestCase(TestCase): |
2711 | + maxDiff = None |
2712 | + |
2713 | + SERVICE_NAME = 'some_juju_service' |
2714 | + WORKING_DIR = '/some_path' |
2715 | + |
2716 | + _object = object() |
2717 | + mocks = {} |
2718 | + |
2719 | + def apply_patch(self, name, value=_object, return_value=_object): |
2720 | + if value is not self._object: |
2721 | + patcher = patch(name, value) |
2722 | + else: |
2723 | + patcher = patch(name) |
2724 | + |
2725 | + mock_obj = patcher.start() |
2726 | + self.addCleanup(patcher.stop) |
2727 | + |
2728 | + if value is self._object and return_value is not self._object: |
2729 | + mock_obj.return_value = return_value |
2730 | + |
2731 | + self.mocks[name] = mock_obj |
2732 | + return mock_obj |
2733 | + |
2734 | + def setUp(self): |
2735 | + super(HookTestCase, self).setUp() |
2736 | + # There's quite a bit of mocking here, due to the large amounts of |
2737 | + # environment context to juju hooks |
2738 | + |
2739 | + self.config = DEFAULTS.copy() |
2740 | + self.relation_data = {'working_dir': self.WORKING_DIR} |
2741 | + |
2742 | + # intercept all charmsupport usage |
2743 | + self.hookenv = self.apply_patch('hooks.hookenv') |
2744 | + self.fetch = self.apply_patch('hooks.fetch') |
2745 | + self.host = self.apply_patch('hooks.host') |
2746 | + |
2747 | + self.hookenv.config.return_value = self.config |
2748 | + self.hookenv.relations_of_type.return_value = [self.relation_data] |
2749 | + |
2750 | + # mocking utilities that touch the host/environment |
2751 | + self.process_template = self.apply_patch('hooks.process_template') |
2752 | + self.apply_patch( |
2753 | + 'hooks.sanitized_service_name', return_value=self.SERVICE_NAME) |
2754 | + self.apply_patch('hooks.cpu_count', return_value=1) |
2755 | + |
2756 | + def assert_wsgi_config_applied(self, expected): |
2757 | + tmpl, config, path = self.process_template.call_args[0] |
2758 | + self.assertEqual(tmpl, 'upstart.tmpl') |
2759 | + self.assertEqual(path, '/etc/init/%s.conf' % self.SERVICE_NAME) |
2760 | + self.assertEqual(config, expected) |
2761 | + self.host.service_restart.assert_called_once_with(self.SERVICE_NAME) |
2762 | + |
2763 | + def get_default_context(self): |
2764 | + expected = DEFAULTS.copy() |
2765 | + expected['unit_name'] = self.SERVICE_NAME |
2766 | + expected['working_dir'] = self.WORKING_DIR |
2767 | + expected['project_name'] = self.SERVICE_NAME |
2768 | + expected['wsgi_workers'] = 2 |
2769 | + expected['env_extra'] = [] |
2770 | + fmt = expected['wsgi_access_logformat'].replace('"', '\\"') |
2771 | + expected['wsgi_access_logformat'] = fmt |
2772 | + return expected |
2773 | + |
2774 | + def test_python_install_hook(self): |
2775 | + hooks.install() |
2776 | + self.assertTrue(self.fetch.apt_update.called) |
2777 | + self.fetch.apt_install.assert_called_once_with( |
2778 | + ['gunicorn', 'python-jinja2']) |
2779 | + |
2780 | + @patch('hooks.glob.glob') |
2781 | + @patch('os.remove') |
2782 | + def test_python_upgrade_hook(self, mock_remove, mock_glob): |
2783 | + path = '/etc/gunicorn.d/unit.conf' |
2784 | + mock_glob.return_value = [path] |
2785 | + hooks.upgrade() |
2786 | + self.assertTrue(self.fetch.apt_update.called) |
2787 | + self.fetch.apt_install.assert_called_once_with( |
2788 | + ['gunicorn', 'python-jinja2']) |
2789 | + |
2790 | + self.host.service_stop.assert_called_once_with('gunicorn') |
2791 | + mock_remove.assert_called_once_with(path) |
2792 | + |
2793 | + def test_default_configure_gunicorn(self): |
2794 | + hooks.configure_gunicorn() |
2795 | + expected = self.get_default_context() |
2796 | + self.assert_wsgi_config_applied(expected) |
2797 | + |
2798 | + def test_configure_gunicorn_no_working_dir(self): |
2799 | + del self.relation_data['working_dir'] |
2800 | + hooks.configure_gunicorn() |
2801 | + self.assertFalse(self.process_template.called) |
2802 | + self.assertFalse(self.host.service_restart.called) |
2803 | + |
2804 | + def test_configure_gunicorn_relation_data(self): |
2805 | + self.relation_data['port'] = 9999 |
2806 | + self.relation_data['wsgi_workers'] = 1 |
2807 | + self.relation_data['unknown'] = 'value' |
2808 | + |
2809 | + hooks.configure_gunicorn() |
2810 | + |
2811 | + self.assertFalse(self.fetch.apt_install.called) |
2812 | + |
2813 | + expected = self.get_default_context() |
2814 | + expected['wsgi_workers'] = 1 |
2815 | + expected['port'] = 9999 |
2816 | + |
2817 | + self.assert_wsgi_config_applied(expected) |
2818 | + |
2819 | + def test_env_extra_parsing(self): |
2820 | + self.relation_data['env_extra'] = 'A=1 B="2" C="3 4" D= E' |
2821 | + |
2822 | + hooks.configure_gunicorn() |
2823 | + |
2824 | + expected = self.get_default_context() |
2825 | + expected['env_extra'] = [ |
2826 | + ['A', '1'], |
2827 | + ['B', '2'], |
2828 | + ['C', '3 4'], |
2829 | + ['D', ''], |
2830 | + # no E |
2831 | + ] |
2832 | + |
2833 | + self.assert_wsgi_config_applied(expected) |
2834 | + |
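`test_env_extra_parsing` expects shell-style quoting to be honoured (`C="3 4"` keeps its space), empty values to survive (`D=`), and bare tokens without `=` to be dropped (`E`). One plausible parser satisfying those expectations, since the charm's actual implementation is not part of this diff:

```python
import shlex

def parse_env_extra(raw):
    """Parse 'A=1 B="2" C="3 4"' into [['A', '1'], ['B', '2'], ...]."""
    pairs = []
    for token in shlex.split(raw):  # honours quotes: 'C="3 4"' -> 'C=3 4'
        if '=' not in token:
            continue                # bare tokens like E are silently dropped
        key, _, value = token.partition('=')
        pairs.append([key, value])
    return pairs
```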
2835 | + def test_env_extra_old_style_parsing(self): |
2836 | + self.relation_data['env_extra'] = "'A': '1', 'B': 2" |
2837 | + |
2838 | + hooks.configure_gunicorn() |
2839 | + |
2840 | + expected = self.get_default_context() |
2841 | + expected['env_extra'] = [ |
2842 | + ['A', '1'], |
2843 | + ['B', '2'], |
2844 | + ] |
2845 | + |
2846 | + self.assert_wsgi_config_applied(expected) |
2847 | + |
2848 | + def do_worker_class(self, worker_class): |
2849 | + self.relation_data['wsgi_worker_class'] = worker_class |
2850 | + hooks.configure_gunicorn() |
2851 | + self.fetch.apt_install.assert_called_once_with( |
2852 | + 'python-%s' % worker_class) |
2853 | + expected = self.get_default_context() |
2854 | + expected['wsgi_worker_class'] = worker_class |
2855 | + self.assert_wsgi_config_applied(expected) |
2856 | + |
2857 | + def test_configure_worker_class_eventlet(self): |
2858 | + self.do_worker_class('eventlet') |
2859 | + |
2860 | + def test_configure_worker_class_tornado(self): |
2861 | + self.do_worker_class('tornado') |
2862 | + |
2863 | + def test_configure_worker_class_gevent(self): |
2864 | + self.do_worker_class('gevent') |
2865 | + |
2866 | + @patch('hooks.os.remove') |
2867 | + def test_wsgi_file_relation_broken(self, remove): |
2868 | + hooks.wsgi_file_relation_broken() |
2869 | + self.host.service_stop.assert_called_once_with(self.SERVICE_NAME) |
2870 | + remove.assert_called_once_with( |
2871 | + '/etc/init/%s.conf' % self.SERVICE_NAME) |
2872 | |
2873 | === added file 'hooks/tests/test_template.py' |
2874 | --- hooks/tests/test_template.py 1970-01-01 00:00:00 +0000 |
2875 | +++ hooks/tests/test_template.py 2014-11-19 17:44:21 +0000 |
2876 | @@ -0,0 +1,125 @@ |
2877 | +import os |
2878 | +from unittest import TestCase |
2879 | +from mock import patch, MagicMock |
2880 | + |
2881 | +import hooks |
2882 | + |
2883 | +EXPECTED = """ |
2884 | +#-------------------------------------------------------------- |
2885 | +# This file is managed by Juju; ANY CHANGES WILL BE OVERWRITTEN |
2886 | +#-------------------------------------------------------------- |
2887 | + |
2888 | +description "Gunicorn daemon for the PROJECT_NAME project" |
2889 | + |
2890 | +start on (local-filesystems and net-device-up IFACE=eth0) |
2891 | +stop on runlevel [!12345] |
2892 | + |
2893 | +# If the process quits unexpectadly trigger a respawn |
2894 | +respawn |
2895 | +respawn limit 10 5 |
2896 | + |
2897 | +setuid WSGI_USER |
2898 | +setgid WSGI_GROUP |
2899 | +chdir WORKING_DIR |
2900 | + |
2901 | +# This line can be removed and replace with the --pythonpath PYTHON_PATH \\ |
2902 | +# option with Gunicorn>1.17 |
2903 | +env PYTHONPATH=PYTHON_PATH |
2904 | +env A="1" |
2905 | +env B="1 2" |
2906 | + |
2907 | + |
2908 | +exec gunicorn \\ |
2909 | + --name=PROJECT_NAME \\ |
2910 | + --workers=WSGI_WORKERS \\ |
2911 | + --worker-class=WSGI_WORKER_CLASS \\ |
2912 | + --worker-connections=WSGI_WORKER_CONNECTIONS \\ |
2913 | + --max-requests=WSGI_MAX_REQUESTS \\ |
2914 | + --backlog=WSGI_BACKLOG \\ |
2915 | + --timeout=WSGI_TIMEOUT \\ |
2916 | + --keep-alive=WSGI_KEEP_ALIVE \\ |
2917 | + --umask=WSGI_UMASK \\ |
2918 | + --bind=LISTEN_IP:PORT \\ |
2919 | + --log-file=WSGI_LOG_FILE \\ |
2920 | + --log-level=WSGI_LOG_LEVEL \\ |
2921 | + --access-logfile=WSGI_ACCESS_LOGFILE \\ |
2922 | + --access-logformat=WSGI_ACCESS_LOGFORMAT \\ |
2923 | + WSGI_EXTRA \\ |
2924 | + WSGI_WSGI_FILE |
2925 | +""".strip() |
2926 | + |
2927 | + |
2928 | +class TemplateTestCase(TestCase): |
2929 | + maxDiff = None |
2930 | + |
2931 | + def setUp(self): |
2932 | + super(TemplateTestCase, self).setUp() |
2933 | + patch_open = patch('hooks.open', create=True) |
2934 | + self.open = patch_open.start() |
2935 | + self.addCleanup(patch_open.stop) |
2936 | + |
2937 | + self.open.return_value = MagicMock(spec=file) |
2938 | + self.file = self.open.return_value.__enter__.return_value |
2939 | + |
2940 | + patch_environ = patch.dict(os.environ, CHARM_DIR='.') |
2941 | + patch_environ.start() |
2942 | + self.addCleanup(patch_environ.stop) |
2943 | + |
2944 | + patch_hookenv = patch('hooks.hookenv') |
2945 | + patch_hookenv.start() |
2946 | + self.addCleanup(patch_hookenv.stop) |
2947 | + |
2948 | + def get_test_context(self): |
2949 | + keys = [ |
2950 | + 'project_name', |
2951 | + 'wsgi_user', |
2952 | + 'wsgi_group', |
2953 | + 'working_dir', |
2954 | + 'python_path', |
2955 | + 'wsgi_workers', |
2956 | + 'wsgi_worker_class', |
2957 | + 'wsgi_worker_connections', |
2958 | + 'wsgi_max_requests', |
2959 | + 'wsgi_backlog', |
2960 | + 'wsgi_timeout', |
2961 | + 'wsgi_keep_alive', |
2962 | + 'wsgi_umask', |
2963 | + 'wsgi_log_file', |
2964 | + 'wsgi_log_level', |
2965 | + 'wsgi_access_logfile', |
2966 | + 'wsgi_access_logformat', |
2967 | + 'listen_ip', |
2968 | + 'port', |
2969 | + 'wsgi_extra', |
2970 | + 'wsgi_wsgi_file', |
2971 | + ] |
2972 | + ctx = dict((k, k.upper()) for k in keys) |
2973 | + ctx['env_extra'] = [["A", "1"], ["B", "1 2"]] |
2974 | + return ctx |
2975 | + |
2976 | + def test_template(self): |
2977 | + |
2978 | + ctx = self.get_test_context() |
2979 | + |
2980 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
2981 | + output = self.file.write.call_args[0][0] |
2982 | + |
2983 | + self.assertMultiLineEqual(EXPECTED, output) |
2984 | + |
2985 | + def test_no_access_logfile(self): |
2986 | + ctx = self.get_test_context() |
2987 | + ctx['wsgi_access_logfile'] = "" |
2988 | + |
2989 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
2990 | + output = self.file.write.call_args[0][0] |
2991 | + |
2992 | + self.assertNotIn('--access-logfile', output) |
2993 | + |
2994 | + def test_no_access_logformat(self): |
2995 | + ctx = self.get_test_context() |
2996 | + ctx['wsgi_access_logformat'] = "" |
2997 | + |
2998 | + hooks.process_template('upstart.tmpl', ctx, 'path') |
2999 | + output = self.file.write.call_args[0][0] |
3000 | + |
3001 | + self.assertNotIn('--access-logformat', output) |
3002 | |
3003 | === added file 'hooks/tests/test_utils.py' |
3004 | --- hooks/tests/test_utils.py 1970-01-01 00:00:00 +0000 |
3005 | +++ hooks/tests/test_utils.py 2014-11-19 17:44:21 +0000 |
3006 | @@ -0,0 +1,100 @@ |
3007 | +import logging |
3008 | +import unittest |
3009 | +import os |
3010 | +import yaml |
3011 | + |
3012 | +from mock import patch |
3013 | + |
3014 | + |
3015 | +def load_config(): |
3016 | + ''' |
3017 | + Walk backwards from __file__ looking for config.yaml; load and return the |
3018 | + 'options' section. |
3019 | + ''' |
3020 | + config = None |
3021 | + f = __file__ |
3022 | + while config is None: |
3023 | + d = os.path.dirname(f) |
3024 | + if os.path.isfile(os.path.join(d, 'config.yaml')): |
3025 | + config = os.path.join(d, 'config.yaml') |
3026 | + break |
3027 | + f = d |
3028 | + |
3029 | + if not config: |
3030 | + logging.error('Could not find config.yaml in any parent directory ' |
3031 | + 'of %s. ' % __file__) |
3032 | + raise Exception |
3033 | + |
3034 | + return yaml.safe_load(open(config).read())['options'] |
3035 | + |
3036 | + |
3037 | +def get_default_config(): |
3038 | + ''' |
3039 | + Load default charm config from config.yaml return as a dict. |
3040 | + If no default is set in config.yaml, its value is None. |
3041 | + ''' |
3042 | + default_config = {} |
3043 | + config = load_config() |
3044 | + for k, v in config.iteritems(): |
3045 | + if 'default' in v: |
3046 | + default_config[k] = v['default'] |
3047 | + else: |
3048 | + default_config[k] = None |
3049 | + return default_config |
3050 | + |
3051 | + |
3052 | +class CharmTestCase(unittest.TestCase): |
3053 | + |
3054 | + def setUp(self, obj, patches): |
3055 | + super(CharmTestCase, self).setUp() |
3056 | + self.patches = patches |
3057 | + self.obj = obj |
3058 | + self.test_config = TestConfig() |
3059 | + self.test_relation = TestRelation() |
3060 | + self.patch_all() |
3061 | + |
3062 | + def patch(self, method): |
3063 | + _m = patch.object(self.obj, method) |
3064 | + mock = _m.start() |
3065 | + self.addCleanup(_m.stop) |
3066 | + return mock |
3067 | + |
3068 | + def patch_all(self): |
3069 | + for method in self.patches: |
3070 | + setattr(self, method, self.patch(method)) |
3071 | + |
3072 | + |
3073 | +class TestConfig(object): |
3074 | + |
3075 | + def __init__(self): |
3076 | + self.config = get_default_config() |
3077 | + |
3078 | + def get(self, attr): |
3079 | + try: |
3080 | + return self.config[attr] |
3081 | + except KeyError: |
3082 | + return None |
3083 | + |
3084 | + def get_all(self): |
3085 | + return self.config |
3086 | + |
3087 | + def set(self, attr, value): |
3088 | + if attr not in self.config: |
3089 | + raise KeyError |
3090 | + self.config[attr] = value |
3091 | + |
3092 | + |
3093 | +class TestRelation(object): |
3094 | + |
3095 | + def __init__(self, relation_data={}): |
3096 | + self.relation_data = relation_data |
3097 | + |
3098 | + def set(self, relation_data): |
3099 | + self.relation_data = relation_data |
3100 | + |
3101 | + def get(self, attr=None, unit=None, rid=None): |
3102 | + if attr is None: |
3103 | + return self.relation_data |
3104 | + elif attr in self.relation_data: |
3105 | + return self.relation_data[attr] |
3106 | + return None |
3107 | |
3108 | === modified file 'metadata.yaml' |
3109 | --- metadata.yaml 2014-10-31 15:31:30 +0000 |
3110 | +++ metadata.yaml 2014-11-19 17:44:21 +0000 |
3111 | @@ -26,6 +26,10 @@ |
3112 | mysql: |
3113 | interface: mysql |
3114 | optional: true |
3115 | + mysql-root: |
3116 | + interface: mysql-root |
3117 | + mysql-shared: |
3118 | + interface: mysql-shared |
3119 | mongodb: |
3120 | interface: mongodb |
3121 | optional: true |
3122 | @@ -38,6 +42,3 @@ |
3123 | cache: |
3124 | interface: memcache |
3125 | optional: true |
3126 | - lander-jenkins: |
3127 | - interface: lander-jenkins |
3128 | - optional: true |
3129 | |
3130 | === removed directory 'playbooks' |
3131 | === removed file 'playbooks/django_manage.yaml' |
3132 | --- playbooks/django_manage.yaml 2014-04-09 13:50:10 +0000 |
3133 | +++ playbooks/django_manage.yaml 1970-01-01 00:00:00 +0000 |
3134 | @@ -1,5 +0,0 @@ |
3135 | -- django_manage: |
3136 | - command=syncdb |
3137 | - app_path={{install_root}}/{{local_unit|dirname|replace("-","_")}} |
3138 | - settings={{local_unit|dirname|replace("-","_")}}.{{django_settings}} |
3139 | - pythonpath={{install_root}} |
3140 | |
3141 | === removed file 'playbooks/install.yaml' |
3142 | --- playbooks/install.yaml 2014-04-29 15:10:03 +0000 |
3143 | +++ playbooks/install.yaml 1970-01-01 00:00:00 +0000 |
3144 | @@ -1,45 +0,0 @@ |
3145 | -- hosts: localhost |
3146 | - user: root |
3147 | - |
3148 | - tasks: |
3149 | -# VCS |
3150 | - - name: get mercurial source |
3151 | - hg: repo={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} purge=yes |
3152 | - when: vcs == 'hg' or vcs == 'mercurial' and not repos_branch |
3153 | - |
3154 | - - name: get bzr source |
3155 | - bzr: name={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} |
3156 | - when: vcs == 'bzr' |
3157 | - |
3158 | - - name: get git source |
3159 | - git: repo={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} |
3160 | - when: vcs == 'git' and not repos_branch |
3161 | - |
3162 | - - name: get subversion source |
3163 | - subversion: repo={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} username={{repos_username}} password={{repos_password}} |
3164 | - when: vcs == 'svn' or vcs == 'subversion' |
3165 | - |
3166 | -# VCS + Branch |
3167 | - - name: get mercurial source with branch |
3168 | - hg: repo={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} purge=yes version={{ repos_branch }} |
3169 | - when: vcs == 'hg' or vcs == 'mercurial' and repos_branch |
3170 | - |
3171 | - - name: get git source with branch |
3172 | - git: repo={{ repos_url }} dest={{install_root}}/{{local_unit|dirname|replace("-","_")}} version={{ repos_branch }} |
3173 | - when: vcs == 'git' and repos_branch |
3174 | - |
3175 | -# PIP |
3176 | - - name: Install pip dependencies |
3177 | - pip: name={{ item }} extra_args="{{ pip_extra_args }}" |
3178 | - with_lines: echo "{{ additional_pip_packages }}" | tr "," "\n" |
3179 | - when: additional_pip_packages != '' |
3180 | - |
3181 | - - name: Install pip requirements |
3182 | - pip: requirements={{install_root}}/{{local_unit|dirname|replace("-","_")}}/{{ item }} extra_args="{{ pip_extra_args }}" |
3183 | - with_lines: echo "{{ requirements_pip_files }}" | tr "," "\n" | awk "! /^http/" |
3184 | - when: requirements_pip_files != '' |
3185 | - |
3186 | - - name: Install http pip requirements |
3187 | - pip: requirements={{ item }} extra_args="{{ pip_extra_args }}" |
3188 | - with_lines: echo "{{ requirements_pip_files }}" | tr "," "\n" | awk "/^http/" |
3189 | - when: requirements_pip_files != '' |
3190 | |
3191 | === modified file 'revision' |
3192 | --- revision 2014-04-17 15:51:54 +0000 |
3193 | +++ revision 2014-11-19 17:44:21 +0000 |
3194 | @@ -1,1 +1,1 @@ |
3195 | -6 |
3196 | +7 |
3197 | |
3198 | === removed file 'tests/00-setup' |
3199 | --- tests/00-setup 2014-10-31 15:25:34 +0000 |
3200 | +++ tests/00-setup 1970-01-01 00:00:00 +0000 |
3201 | @@ -1,13 +0,0 @@ |
3202 | -#!/bin/bash |
3203 | - |
3204 | -set -ex |
3205 | - |
3206 | -# Check if amulet is installed before adding repository and updating apt-get. |
3207 | -dpkg -s amulet |
3208 | -if [ $? -ne 0 ]; then |
3209 | - sudo add-apt-repository -y ppa:juju/stable |
3210 | - sudo apt-get update |
3211 | - sudo apt-get install -y amulet |
3212 | -fi |
3213 | - |
3214 | -# Install any additional python packages or software here. |
3215 | \ No newline at end of file |
3216 | |
3217 | === added file 'tests/01-dj13' |
3218 | --- tests/01-dj13 1970-01-01 00:00:00 +0000 |
3219 | +++ tests/01-dj13 2014-11-19 17:44:21 +0000 |
3220 | @@ -0,0 +1,47 @@ |
3221 | +#!/usr/bin/python3 |
3222 | +""" |
3223 | +This test creates a real deployment, and runs some checks against it. |
3224 | + |
3225 | +FIXME: revert to using ssh -q, stderr=STDOUT instead of 2>&1, stderr=PIPE once |
3226 | + lp:1281577 is addressed. |
3227 | +""" |
3228 | + |
3229 | +import logging |
3230 | +import unittest |
3231 | +import jujulib.deployer |
3232 | + |
3233 | +from os import getenv |
3234 | +from os.path import dirname, abspath, join |
3235 | + |
3236 | +from helpers import (check_url, juju_status, get_service_config, |
3237 | + find_address, get_service_conf, BaseTests) |
3238 | + |
3239 | +log = logging.getLogger(__file__) |
3240 | + |
3241 | + |
3242 | +def setUpModule(): |
3243 | +    """Deploys Django via the charm. All the tests use this deployment.""" |
3244 | + deployer = jujulib.deployer.Deployer() |
3245 | + config_file = join( |
3246 | + dirname(dirname(abspath(__file__))), |
3247 | + "tests", "config", "django.yaml") |
3248 | + deployer.deploy(getenv("DEPLOYER_TARGET", "django13"), [config_file], |
3249 | + timeout=2000) |
3250 | + |
3251 | + |
3252 | + |
3253 | +class DjangoServiceTests(BaseTests): |
3254 | + def test_app(self): |
3255 | + """Verify that the APP service is up.""" |
3256 | + |
3257 | + frontend = find_address(juju_status(), "django13") |
3258 | + good_content = "Welcome to Django" |
3259 | + log.info("Polling. Waiting for app server: {}".format(frontend)) |
3260 | + check_url("http://{}:8080/".format(frontend), good_content, interval=30, |
3261 | + attempts=10, retry_unavailable=True) |
3262 | + |
3263 | + |
3264 | +if __name__ == "__main__": |
3265 | + logging.basicConfig( |
3266 | + level='DEBUG', format='%(asctime)s %(levelname)s %(message)s') |
3267 | + unittest.main(verbosity=2) |
3268 | |
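All of the new test scripts rely on the same retry idiom: poll until the expected content appears, sleeping `interval` seconds between up to `attempts` tries (so `interval=30, attempts=10` waits roughly five minutes). A condensed sketch of that idiom (the name `poll_until` is illustrative; the real implementation is `check_url` in tests/helpers):

```python
import time

def poll_until(condition, interval=30, attempts=10):
    """Retry `condition` until it returns a truthy value, the way the
    tests' check_url polls a URL; raise AssertionError after `attempts`
    tries with `interval` seconds between them."""
    last = None
    for _ in range(attempts):
        last = condition()
        if last:
            return last
        time.sleep(interval)
    raise AssertionError("condition not met; last result: {!r}".format(last))

# Returns immediately once the condition holds; no sleep on success.
print(poll_until(lambda: "Welcome to Django", interval=0, attempts=1))
# → Welcome to Django
```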
3269 | === added file 'tests/01-dj14' |
3270 | --- tests/01-dj14 1970-01-01 00:00:00 +0000 |
3271 | +++ tests/01-dj14 2014-11-19 17:44:21 +0000 |
3272 | @@ -0,0 +1,47 @@ |
3273 | +#!/usr/bin/python3 |
3274 | +""" |
3275 | +This test creates a real deployment, and runs some checks against it. |
3276 | + |
3277 | +FIXME: revert to using ssh -q, stderr=STDOUT instead of 2>&1, stderr=PIPE once |
3278 | + lp:1281577 is addressed. |
3279 | +""" |
3280 | + |
3281 | +import logging |
3282 | +import unittest |
3283 | +import jujulib.deployer |
3284 | + |
3285 | +from os import getenv |
3286 | +from os.path import dirname, abspath, join |
3287 | + |
3288 | +from helpers import (check_url, juju_status, get_service_config, |
3289 | + find_address, get_service_conf, BaseTests) |
3290 | + |
3291 | +log = logging.getLogger(__file__) |
3292 | + |
3293 | + |
3294 | +def setUpModule(): |
3295 | +    """Deploys Django via the charm. All the tests use this deployment.""" |
3296 | + deployer = jujulib.deployer.Deployer() |
3297 | + config_file = join( |
3298 | + dirname(dirname(abspath(__file__))), |
3299 | + "tests", "config", "django.yaml") |
3300 | + deployer.deploy(getenv("DEPLOYER_TARGET", "django14"), [config_file], |
3301 | + timeout=2000) |
3302 | + |
3303 | + |
3304 | + |
3305 | +class DjangoServiceTests(BaseTests): |
3306 | + def test_app(self): |
3307 | + """Verify that the APP service is up.""" |
3308 | + |
3309 | + frontend = find_address(juju_status(), "django14") |
3310 | + good_content = "Welcome to Django" |
3311 | + log.info("Polling. Waiting for app server: {}".format(frontend)) |
3312 | + check_url("http://{}:8080/".format(frontend), good_content, interval=30, |
3313 | + attempts=10, retry_unavailable=True) |
3314 | + |
3315 | + |
3316 | +if __name__ == "__main__": |
3317 | + logging.basicConfig( |
3318 | + level='DEBUG', format='%(asctime)s %(levelname)s %(message)s') |
3319 | + unittest.main(verbosity=2) |
3320 | |
3321 | === added file 'tests/01-djdistro' |
3322 | --- tests/01-djdistro 1970-01-01 00:00:00 +0000 |
3323 | +++ tests/01-djdistro 2014-11-19 17:44:21 +0000 |
3324 | @@ -0,0 +1,47 @@ |
3325 | +#!/usr/bin/python3 |
3326 | +""" |
3327 | +This test creates a real deployment, and runs some checks against it. |
3328 | + |
3329 | +FIXME: revert to using ssh -q, stderr=STDOUT instead of 2>&1, stderr=PIPE once |
3330 | + lp:1281577 is addressed. |
3331 | +""" |
3332 | + |
3333 | +import logging |
3334 | +import unittest |
3335 | +import jujulib.deployer |
3336 | + |
3337 | +from os import getenv |
3338 | +from os.path import dirname, abspath, join |
3339 | + |
3340 | +from helpers import (check_url, juju_status, get_service_config, |
3341 | + find_address, get_service_conf, BaseTests) |
3342 | + |
3343 | +log = logging.getLogger(__file__) |
3344 | + |
3345 | + |
3346 | +def setUpModule(): |
3347 | +    """Deploys Django via the charm. All the tests use this deployment.""" |
3348 | + deployer = jujulib.deployer.Deployer() |
3349 | + config_file = join( |
3350 | + dirname(dirname(abspath(__file__))), |
3351 | + "tests", "config", "django.yaml") |
3352 | + deployer.deploy(getenv("DEPLOYER_TARGET", "djangodistro"), [config_file], |
3353 | + timeout=2000) |
3354 | + |
3355 | + |
3356 | + |
3357 | +class DjangoServiceTests(BaseTests): |
3358 | + def test_app(self): |
3359 | + """Verify that the APP service is up.""" |
3360 | + |
3361 | + frontend = find_address(juju_status(), "djangodistro") |
3362 | + good_content = "Welcome to Django" |
3363 | + log.info("Polling. Waiting for app server: {}".format(frontend)) |
3364 | + check_url("http://{}:8080/".format(frontend), good_content, interval=30, |
3365 | + attempts=10, retry_unavailable=True) |
3366 | + |
3367 | + |
3368 | +if __name__ == "__main__": |
3369 | + logging.basicConfig( |
3370 | + level='DEBUG', format='%(asctime)s %(levelname)s %(message)s') |
3371 | + unittest.main(verbosity=2) |
3372 | |
3373 | === removed file 'tests/01_deploy.test' |
3374 | --- tests/01_deploy.test 2014-10-31 15:25:34 +0000 |
3375 | +++ tests/01_deploy.test 1970-01-01 00:00:00 +0000 |
3376 | @@ -1,51 +0,0 @@ |
3377 | -#!/usr/bin/python |
3378 | -# Copyright 2012 Canonical Ltd. This software is licensed under the |
3379 | -# GNU Affero General Public License version 3 (see the file LICENSE). |
3380 | - |
3381 | -from helpers import ( |
3382 | - command, |
3383 | - make_charm_config_file, |
3384 | - unit_info, |
3385 | - wait_for_page_contents, |
3386 | - ) |
3387 | -import unittest |
3388 | - |
3389 | -juju = command('juju') |
3390 | - |
3391 | - |
3392 | -class TestCharm(unittest.TestCase): |
3393 | - |
3394 | - def tearDown(self): |
3395 | - juju('destroy-service', 'postgresql') |
3396 | - juju('destroy-service', 'python-django') |
3397 | - juju('destroy-service', 'gunicorn') |
3398 | - |
3399 | - def deploy(self, charm_config=None): |
3400 | - if charm_config is not None: |
3401 | - charm_config_file = make_charm_config_file(charm_config) |
3402 | - juju('deploy', 'postgresql') |
3403 | - juju('deploy', '--config=' + charm_config_file.name, 'python-django') |
3404 | - juju('deploy', 'gunicorn') |
3405 | - juju('add-relation', 'python-django:db', 'postgresql:db') |
3406 | - juju('add-relation', 'python-django', 'gunicorn') |
3407 | - |
3408 | - |
3409 | - def expose_and_check_page(self): |
3410 | - juju('expose', 'python-django') |
3411 | - addr = unit_info('python-django', 'public-address') |
3412 | - url = 'http://{}:8080/admin/'.format(addr) |
3413 | - wait_for_page_contents(url, 'Administration de Django', timeout=1000) |
3414 | - |
3415 | - def get_config(self): |
3416 | - return { |
3417 | - 'site_secret_key': 'abcdefghijklmmnopqrstuvwxyz' |
3418 | - } |
3419 | - |
3420 | - def test_port_opened(self): |
3421 | - # Deploying a buildbot master should result in it opening a port and |
3422 | - # serving its status via HTTP. |
3423 | - self.deploy(self.get_config()) |
3424 | - self.expose_and_check_page() |
3425 | - |
3426 | -if __name__ == '__main__': |
3427 | - unittest.main() |
3428 | |
3429 | === removed file 'tests/10-bundle-test.py' |
3430 | --- tests/10-bundle-test.py 2014-10-31 15:25:34 +0000 |
3431 | +++ tests/10-bundle-test.py 1970-01-01 00:00:00 +0000 |
3432 | @@ -1,33 +0,0 @@ |
3433 | -#!/usr/bin/env python3 |
3434 | - |
3435 | -# This amulet test deploys the bundles.yaml file in this directory. |
3436 | - |
3437 | -import os |
3438 | -import unittest |
3439 | -import yaml |
3440 | -import amulet |
3441 | - |
3442 | -seconds_to_wait = 600 |
3443 | - |
3444 | - |
3445 | -class BundleTest(unittest.TestCase): |
3446 | - """ Create a class for testing the charm in the unit test framework. """ |
3447 | - @classmethod |
3448 | - def setUpClass(cls): |
3449 | - """ Set up an amulet deployment using the bundle. """ |
3450 | - d = amulet.Deployment() |
3451 | - bundle_path = os.path.join(os.path.dirname(__file__), 'bundles.yaml') |
3452 | - with open(bundle_path, 'r') as bundle_file: |
3453 | - contents = yaml.safe_load(bundle_file) |
3454 | - d.load(contents) |
3455 | - d.setup(seconds_to_wait) |
3456 | - d.sentry.wait(seconds_to_wait) |
3457 | - cls.d = d |
3458 | - |
3459 | - def test_deployed(self): |
3460 | - """ Test to see if the bundle deployed successfully. """ |
3461 | - self.assertTrue(self.d.deployed) |
3462 | - |
3463 | - |
3464 | -if __name__ == '__main__': |
3465 | - unittest.main() |
3466 | \ No newline at end of file |
3467 | |
3468 | === added file 'tests/10-mysql' |
3469 | --- tests/10-mysql 1970-01-01 00:00:00 +0000 |
3470 | +++ tests/10-mysql 2014-11-19 17:44:21 +0000 |
3471 | @@ -0,0 +1,58 @@ |
3472 | +#!/usr/bin/python3 |
3473 | +""" |
3474 | +This test creates a real deployment, and runs some checks against it. |
3475 | + |
3476 | +FIXME: revert to using ssh -q, stderr=STDOUT instead of 2>&1, stderr=PIPE once |
3477 | + lp:1281577 is addressed. |
3478 | +""" |
3479 | + |
3480 | +import logging |
3481 | +import unittest |
3482 | +import jujulib.deployer |
3483 | + |
3484 | +from os import getenv |
3485 | +from os.path import dirname, abspath, join |
3486 | +from subprocess import check_output, STDOUT |
3487 | + |
3488 | +from helpers import (check_url, juju_status, get_service_config, |
3489 | + find_address, get_service_conf, BaseTests) |
3490 | + |
3491 | +log = logging.getLogger(__file__) |
3492 | + |
3493 | + |
3494 | +def setUpModule(): |
3495 | +    """Deploys Django via the charm. All the tests use this deployment.""" |
3496 | + deployer = jujulib.deployer.Deployer() |
3497 | + config_file = join( |
3498 | + dirname(dirname(abspath(__file__))), |
3499 | + "tests", "config", "django.yaml") |
3500 | + deployer.deploy(getenv("DEPLOYER_TARGET", "django-mysql"), [config_file], |
3501 | + timeout=2000) |
3502 | + |
3503 | + frontend = find_address(juju_status(), "python-django") |
3504 | + good_content = "Welcome to Django" |
3505 | + log.info("Polling. Waiting for app server: {}".format(frontend)) |
3506 | + check_url("http://{}:8080/".format(frontend), good_content, interval=30, |
3507 | + attempts=10, retry_unavailable=True) |
3508 | + |
3509 | + |
3510 | +class MysqlServiceTests(BaseTests): |
3511 | + @classmethod |
3512 | + def setUpClass(cls): |
3513 | + """Prepares juju_status which many tests use.""" |
3514 | + cls.juju_status = juju_status() |
3515 | + cls.frontend = find_address(cls.juju_status, "python-django") |
3516 | + |
3517 | + def test_ssh(self): |
3518 | + good_content = "mysql" |
3519 | + output = check_output(["juju", "ssh", "python-django/0", |
3520 | + "sudo", "-u", "www-data", |
3521 | + "cat", "/srv/python_django/juju_settings/60-mysql.py"], |
3522 | + stderr=STDOUT).decode("utf-8") |
3523 | + self.assertIn(good_content, output, msg=output) |
3524 | + |
3525 | + |
3526 | +if __name__ == "__main__": |
3527 | + logging.basicConfig( |
3528 | + level='DEBUG', format='%(asctime)s %(levelname)s %(message)s') |
3529 | + unittest.main(verbosity=2) |
3530 | |
3531 | === added file 'tests/10-postgresql' |
3532 | --- tests/10-postgresql 1970-01-01 00:00:00 +0000 |
3533 | +++ tests/10-postgresql 2014-11-19 17:44:21 +0000 |
3534 | @@ -0,0 +1,58 @@ |
3535 | +#!/usr/bin/python3 |
3536 | +""" |
3537 | +This test creates a real deployment, and runs some checks against it. |
3538 | + |
3539 | +FIXME: revert to using ssh -q, stderr=STDOUT instead of 2>&1, stderr=PIPE once |
3540 | + lp:1281577 is addressed. |
3541 | +""" |
3542 | + |
3543 | +import logging |
3544 | +import unittest |
3545 | +import jujulib.deployer |
3546 | + |
3547 | +from os import getenv |
3548 | +from os.path import dirname, abspath, join |
3549 | +from subprocess import check_output, STDOUT |
3550 | + |
3551 | +from helpers import (check_url, juju_status, get_service_config, |
3552 | + find_address, get_service_conf, BaseTests) |
3553 | + |
3554 | +log = logging.getLogger(__file__) |
3555 | + |
3556 | + |
3557 | +def setUpModule(): |
3558 | +    """Deploys Django via the charm. All the tests use this deployment.""" |
3559 | + deployer = jujulib.deployer.Deployer() |
3560 | + config_file = join( |
3561 | + dirname(dirname(abspath(__file__))), |
3562 | + "tests", "config", "django.yaml") |
3563 | + deployer.deploy(getenv("DEPLOYER_TARGET", "django-postgresql"), [config_file], |
3564 | + timeout=2000) |
3565 | + |
3566 | + frontend = find_address(juju_status(), "python-django") |
3567 | + good_content = "Welcome to Django" |
3568 | + log.info("Polling. Waiting for app server: {}".format(frontend)) |
3569 | + check_url("http://{}:8080/".format(frontend), good_content, interval=30, |
3570 | + attempts=10, retry_unavailable=True) |
3571 | + |
3572 | + |
3573 | +class PostgresqlServiceTests(BaseTests): |
3574 | + @classmethod |
3575 | + def setUpClass(cls): |
3576 | + """Prepares juju_status which many tests use.""" |
3577 | + cls.juju_status = juju_status() |
3578 | + cls.frontend = find_address(cls.juju_status, "python-django") |
3579 | + |
3580 | + def test_ssh(self): |
3581 | + good_content = "psycopg2" |
3582 | + output = check_output(["juju", "ssh", "python-django/0", |
3583 | + "sudo", "-u", "www-data", |
3584 | + "cat", "/srv/python_django/juju_settings/60-pgsql.py"], |
3585 | + stderr=STDOUT).decode("utf-8") |
3586 | + self.assertIn(good_content, output, msg=output) |
3587 | + |
3588 | + |
3589 | +if __name__ == "__main__": |
3590 | + logging.basicConfig( |
3591 | + level='DEBUG', format='%(asctime)s %(levelname)s %(message)s') |
3592 | + unittest.main(verbosity=2) |
3593 | |
3594 | === removed file 'tests/bundles.yaml' |
3595 | --- tests/bundles.yaml 2014-10-31 15:25:34 +0000 |
3596 | +++ tests/bundles.yaml 1970-01-01 00:00:00 +0000 |
3597 | @@ -1,30 +0,0 @@ |
3598 | -django-test: |
3599 | - services: |
3600 | - "python-django": |
3601 | - charm: "cs:precise/python-django-9" |
3602 | - num_units: 1 |
3603 | - annotations: |
3604 | - "gui-x": "300" |
3605 | - "gui-y": "300" |
3606 | - gunicorn: |
3607 | - charm: "cs:precise/gunicorn-12" |
3608 | - options: |
3609 | - postgresql: |
3610 | - charm: "cs:precise/postgresql-78" |
3611 | - num_units: 1 |
3612 | - options: |
3613 | - annotations: |
3614 | - "gui-x": "657" |
3615 | - "gui-y": "300" |
3616 | - haproxy: |
3617 | - charm: "cs:precise/haproxy-35" |
3618 | - num_units: 1 |
3619 | - annotations: |
3620 | - "gui-x": "301" |
3621 | - "gui-y": "19" |
3622 | - relations: |
3623 | - - - "python-django:pgsql" |
3624 | - - "postgresql:db" |
3625 | - - - "python-django:wsgi" |
3626 | - - "gunicorn:wsgi-file" |
3627 | - series: trusty |
3628 | |
3629 | === added directory 'tests/config' |
3630 | === added file 'tests/config/django.yaml' |
3631 | --- tests/config/django.yaml 1970-01-01 00:00:00 +0000 |
3632 | +++ tests/config/django.yaml 2014-11-19 17:44:21 +0000 |
3633 | @@ -0,0 +1,98 @@ |
3634 | +django-postgresql: |
3635 | + series: precise |
3636 | + services: |
3637 | + postgresql: |
3638 | + charm: "cs:precise/postgresql" |
3639 | + options: |
3640 | + performance_tuning: "manual" |
3641 | + shared_buffers: "15MB" |
3642 | + effective_cache_size: "5MB" |
3643 | + gunicorn: |
3644 | + charm: "cs:precise/gunicorn" |
3645 | + python-django: |
3646 | + branch: "lp:charmers/precise/python-django" |
3647 | + charm: python-django |
3648 | + num_units: 1 |
3649 | + options: |
3650 | + django_debug: True |
3651 | + expose: true |
3652 | + relations: |
3653 | + - - "python-django:wsgi" |
3654 | + - "gunicorn:wsgi-file" |
3655 | + - - "python-django" |
3656 | + - "postgresql:db" |
3657 | + |
3658 | +django-mysql: |
3659 | + series: precise |
3660 | + services: |
3661 | + mysql: |
3662 | + charm: "cs:precise/mysql" |
3663 | + options: |
3664 | + dataset-size: "100M" |
3665 | + gunicorn: |
3666 | + charm: "cs:precise/gunicorn" |
3667 | + python-django: |
3668 | + branch: "lp:charmers/precise/python-django" |
3669 | + charm: python-django |
3670 | + num_units: 1 |
3671 | + options: |
3672 | + django_debug: True |
3673 | + expose: true |
3674 | + relations: |
3675 | + - - "python-django:wsgi" |
3676 | + - "gunicorn:wsgi-file" |
3677 | + - - "python-django" |
3678 | + - "mysql:db" |
3679 | + |
3680 | +djangodistro: |
3681 | + series: precise |
3682 | + services: |
3683 | + gunicorn: |
3684 | + charm: "cs:precise/gunicorn" |
3685 | + djangodistro: |
3686 | + branch: "lp:charmers/precise/python-django" |
3687 | + charm: python-django |
3688 | + num_units: 1 |
3689 | + options: |
3690 | + django_version: 'distro' |
3691 | + django_debug: True |
3692 | + expose: true |
3693 | + relations: |
3694 | + - - "djangodistro:wsgi" |
3695 | + - "gunicorn:wsgi-file" |
3696 | + |
3697 | +django13: |
3698 | + series: precise |
3699 | + services: |
3700 | + gunicorn: |
3701 | + charm: "cs:precise/gunicorn" |
3702 | + django13: |
3703 | + branch: "lp:charmers/precise/python-django" |
3704 | + charm: python-django |
3705 | + num_units: 1 |
3706 | + options: |
3707 | + django_version: 'django>=1.3,<1.4' |
3708 | + pip_extra_args: "-U -i http://10.0.3.1:3141/root/pypi/" |
3709 | + django_debug: True |
3710 | + expose: true |
3711 | + relations: |
3712 | + - - "django13:wsgi" |
3713 | + - "gunicorn:wsgi-file" |
3714 | + |
3715 | +django14: |
3716 | + series: precise |
3717 | + services: |
3718 | + gunicorn: |
3719 | + charm: "cs:precise/gunicorn" |
3720 | + django14: |
3721 | + branch: "lp:charmers/precise/python-django" |
3722 | + charm: python-django |
3723 | + num_units: 1 |
3724 | + options: |
3725 | + django_version: 'django>=1.4,<1.5' |
3726 | + pip_extra_args: "-U -i http://10.0.3.1:3141/root/pypi/" |
3727 | + django_debug: True |
3728 | + expose: true |
3729 | + relations: |
3730 | + - - "django14:wsgi" |
3731 | + - "gunicorn:wsgi-file" |
3732 | |
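Each test script picks its bundle from this file through the `DEPLOYER_TARGET` environment variable, defaulting to the script's own target (`django13`, `django14`, `djangodistro`, `django-mysql` or `django-postgresql`). A small sketch of that lookup, with the environment passed in explicitly so it can be exercised without touching the real environment (the helper name is illustrative):

```python
def pick_target(environ, default):
    """Mirror the tests' setUpModule: DEPLOYER_TARGET overrides the
    per-script default bundle name from tests/config/django.yaml, as in
    deployer.deploy(getenv("DEPLOYER_TARGET", "django13"), ...)."""
    return environ.get("DEPLOYER_TARGET", default)

print(pick_target({}, "django13"))                                    # django13
print(pick_target({"DEPLOYER_TARGET": "django-mysql"}, "django13"))   # django-mysql
```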
3733 | === added directory 'tests/helpers' |
3734 | === removed file 'tests/helpers.py' |
3735 | --- tests/helpers.py 2013-03-20 18:11:56 +0000 |
3736 | +++ tests/helpers.py 1970-01-01 00:00:00 +0000 |
3737 | @@ -1,278 +0,0 @@ |
3738 | -# Copyright 2012 Canonical Ltd. This software is licensed under the |
3739 | -# GNU Affero General Public License version 3 (see the file LICENSE). |
3740 | - |
3741 | -"""Helper functions for writing Juju charms in Python.""" |
3742 | - |
3743 | -__metaclass__ = type |
3744 | -__all__ = [ |
3745 | - 'get_config', |
3746 | - 'log', |
3747 | - 'log_entry', |
3748 | - 'log_exit', |
3749 | - 'relation_get', |
3750 | - 'relation_set', |
3751 | - 'relation_ids', |
3752 | - 'relation_list', |
3753 | - 'config_get', |
3754 | - 'unit_get', |
3755 | - 'open_port', |
3756 | - 'close_port', |
3757 | - 'service_control', |
3758 | - 'unit_info', |
3759 | - 'wait_for_machine', |
3760 | - 'wait_for_page_contents', |
3761 | - 'wait_for_relation', |
3762 | - 'wait_for_unit', |
3763 | - ] |
3764 | - |
3765 | -from collections import namedtuple |
3766 | -import json |
3767 | -import operator |
3768 | -from shelltoolbox import ( |
3769 | - command, |
3770 | - script_name, |
3771 | - run |
3772 | - ) |
3773 | -import tempfile |
3774 | -import time |
3775 | -import urllib2 |
3776 | -import yaml |
3777 | -from subprocess import CalledProcessError |
3778 | - |
3779 | - |
3780 | -SLEEP_AMOUNT = 0.1 |
3781 | -Env = namedtuple('Env', 'uid gid home') |
3782 | -# We create a juju_status Command here because it makes testing much, |
3783 | -# much easier. |
3784 | -juju_status = lambda: command('juju')('status') |
3785 | - |
3786 | - |
3787 | -def log(message, juju_log=command('juju-log')): |
3788 | - return juju_log('--', message) |
3789 | - |
3790 | - |
3791 | -def log_entry(): |
3792 | - log("--> Entering {}".format(script_name())) |
3793 | - |
3794 | - |
3795 | -def log_exit(): |
3796 | - log("<-- Exiting {}".format(script_name())) |
3797 | - |
3798 | - |
3799 | -def get_config(): |
3800 | - _config_get = command('config-get', '--format=json') |
3801 | - return json.loads(_config_get()) |
3802 | - |
3803 | - |
3804 | -def relation_get(attribute=None, unit=None, rid=None): |
3805 | - cmd = command('relation-get') |
3806 | - if attribute is None and unit is None and rid is None: |
3807 | - return cmd().strip() |
3808 | - _args = [] |
3809 | - if rid: |
3810 | - _args.append('-r') |
3811 | - _args.append(rid) |
3812 | - if attribute is not None: |
3813 | - _args.append(attribute) |
3814 | - if unit: |
3815 | - _args.append(unit) |
3816 | - return cmd(*_args).strip() |
3817 | - |
3818 | - |
3819 | -def relation_set(**kwargs): |
3820 | - cmd = command('relation-set') |
3821 | - args = ['{}={}'.format(k, v) for k, v in kwargs.items()] |
3822 | - cmd(*args) |
3823 | - |
3824 | - |
3825 | -def relation_ids(relation_name): |
3826 | - cmd = command('relation-ids') |
3827 | - args = [relation_name] |
3828 | - return cmd(*args).split() |
3829 | - |
3830 | - |
3831 | -def relation_list(rid=None): |
3832 | - cmd = command('relation-list') |
3833 | - args = [] |
3834 | - if rid: |
3835 | - args.append('-r') |
3836 | - args.append(rid) |
3837 | - return cmd(*args).split() |
3838 | - |
3839 | - |
3840 | -def config_get(attribute): |
3841 | - cmd = command('config-get') |
3842 | - args = [attribute] |
3843 | - return cmd(*args).strip() |
3844 | - |
3845 | - |
3846 | -def unit_get(attribute): |
3847 | - cmd = command('unit-get') |
3848 | - args = [attribute] |
3849 | - return cmd(*args).strip() |
3850 | - |
3851 | - |
3852 | -def open_port(port, protocol="TCP"): |
3853 | - cmd = command('open-port') |
3854 | - args = ['{}/{}'.format(port, protocol)] |
3855 | - cmd(*args) |
3856 | - |
3857 | - |
3858 | -def close_port(port, protocol="TCP"): |
3859 | - cmd = command('close-port') |
3860 | - args = ['{}/{}'.format(port, protocol)] |
3861 | - cmd(*args) |
3862 | - |
3863 | -START = "start" |
3864 | -RESTART = "restart" |
3865 | -STOP = "stop" |
3866 | -RELOAD = "reload" |
3867 | - |
3868 | - |
3869 | -def service_control(service_name, action): |
3870 | - cmd = command('service') |
3871 | - args = [service_name, action] |
3872 | - try: |
3873 | - if action == RESTART: |
3874 | - try: |
3875 | - cmd(*args) |
3876 | - except CalledProcessError: |
3877 | - service_control(service_name, START) |
3878 | - else: |
3879 | - cmd(*args) |
3880 | - except CalledProcessError: |
3881 | - log("Failed to perform {} on service {}".format(action, service_name)) |
3882 | - |
3883 | - |
3884 | -def configure_source(update=False): |
3885 | - source = config_get('source') |
3886 | - if (source.startswith('ppa:') or |
3887 | - source.startswith('cloud:') or |
3888 | - source.startswith('http:')): |
3889 | - run('add-apt-repository', source) |
3890 | - if source.startswith("http:"): |
3891 | - run('apt-key', 'import', config_get('key')) |
3892 | - if update: |
3893 | - run('apt-get', 'update') |
3894 | - |
3895 | - |
3896 | -def make_charm_config_file(charm_config): |
3897 | - charm_config_file = tempfile.NamedTemporaryFile() |
3898 | - charm_config_file.write(yaml.dump(charm_config)) |
3899 | - charm_config_file.flush() |
3900 | - # The NamedTemporaryFile instance is returned instead of just the name |
3901 | - # because we want to take advantage of garbage collection-triggered |
3902 | - # deletion of the temp file when it goes out of scope in the caller. |
3903 | - return charm_config_file |
3904 | - |
3905 | - |
3906 | -def unit_info(service_name, item_name, data=None, unit=None): |
3907 | - if data is None: |
3908 | - data = yaml.safe_load(juju_status()) |
3909 | - service = data['services'].get(service_name) |
3910 | - if service is None: |
3911 | - # XXX 2012-02-08 gmb: |
3912 | - # This allows us to cope with the race condition that we |
3913 | - # have between deploying a service and having it come up in |
3914 | - # `juju status`. We could probably do with cleaning it up so |
3915 | - # that it fails a bit more noisily after a while. |
3916 | - return '' |
3917 | - units = service['units'] |
3918 | - if unit is not None: |
3919 | - item = units[unit][item_name] |
3920 | - else: |
3921 | - # It might seem odd to sort the units here, but we do it to |
3922 | - # ensure that when no unit is specified, the first unit for the |
3923 | - # service (or at least the one with the lowest number) is the |
3924 | - # one whose data gets returned. |
3925 | - sorted_unit_names = sorted(units.keys()) |
3926 | - item = units[sorted_unit_names[0]][item_name] |
3927 | - return item |
3928 | - |
3929 | - |
3930 | -def get_machine_data(): |
3931 | - return yaml.safe_load(juju_status())['machines'] |
3932 | - |
3933 | - |
3934 | -def wait_for_machine(num_machines=1, timeout=300): |
3935 | - """Wait `timeout` seconds for `num_machines` machines to come up. |
3936 | - |
3937 | - This wait_for... function can be called by other wait_for functions |
3938 | - whose timeouts might be too short in situations where only a bare |
3939 | - Juju setup has been bootstrapped. |
3940 | - |
3941 | - :return: A tuple of (num_machines, time_taken). This is used for |
3942 | - testing. |
3943 | - """ |
3944 | - # You may think this is a hack, and you'd be right. The easiest way |
3945 | - # to tell what environment we're working in (LXC vs EC2) is to check |
3946 | - # the dns-name of the first machine. If it's localhost we're in LXC |
3947 | - # and we can just return here. |
3948 | - if get_machine_data()[0]['dns-name'] == 'localhost': |
3949 | - return 1, 0 |
3950 | - start_time = time.time() |
3951 | - while True: |
3952 | - # Drop the first machine, since it's the Zookeeper and that's |
3953 | - # not a machine that we need to wait for. This will only work |
3954 | - # for EC2 environments, which is why we return early above if |
3955 | - # we're in LXC. |
3956 | - machine_data = get_machine_data() |
3957 | - non_zookeeper_machines = [ |
3958 | - machine_data[key] for key in machine_data.keys()[1:]] |
3959 | - if len(non_zookeeper_machines) >= num_machines: |
3960 | - all_machines_running = True |
3961 | - for machine in non_zookeeper_machines: |
3962 | - if machine.get('instance-state') != 'running': |
3963 | - all_machines_running = False |
3964 | - break |
3965 | - if all_machines_running: |
3966 | - break |
3967 | - if time.time() - start_time >= timeout: |
3968 | - raise RuntimeError('timeout waiting for service to start') |
3969 | - time.sleep(SLEEP_AMOUNT) |
3970 | - return num_machines, time.time() - start_time |
3971 | - |
3972 | - |
3973 | -def wait_for_unit(service_name, timeout=480): |
3974 | - """Wait `timeout` seconds for a given service name to come up.""" |
3975 | - wait_for_machine(num_machines=1) |
3976 | - start_time = time.time() |
3977 | - while True: |
3978 | - state = unit_info(service_name, 'agent-state') |
3979 | - if 'error' in state or state == 'started': |
3980 | - break |
3981 | - if time.time() - start_time >= timeout: |
3982 | - raise RuntimeError('timeout waiting for service to start') |
3983 | - time.sleep(SLEEP_AMOUNT) |
3984 | - if state != 'started': |
3985 | - raise RuntimeError('unit did not start, agent-state: ' + state) |
3986 | - |
3987 | - |
3988 | -def wait_for_relation(service_name, relation_name, timeout=120): |
3989 | - """Wait `timeout` seconds for a given relation to come up.""" |
3990 | - start_time = time.time() |
3991 | - while True: |
3992 | - relation = unit_info(service_name, 'relations').get(relation_name) |
3993 | - if relation is not None and relation['state'] == 'up': |
3994 | - break |
3995 | - if time.time() - start_time >= timeout: |
3996 | - raise RuntimeError('timeout waiting for relation to be up') |
3997 | - time.sleep(SLEEP_AMOUNT) |
3998 | - |
3999 | - |
4000 | -def wait_for_page_contents(url, contents, timeout=120, validate=None): |
4001 | - if validate is None: |
4002 | - validate = operator.contains |
4003 | - start_time = time.time() |
4004 | - while True: |
4005 | - try: |
4006 | - stream = urllib2.urlopen(url) |
4007 | - except (urllib2.HTTPError, urllib2.URLError): |
4008 | - pass |
4009 | - else: |
4010 | - page = stream.read() |
4011 | - if validate(page, contents): |
4012 | - return page |
4013 | - if time.time() - start_time >= timeout: |
4014 | - raise RuntimeError('timeout waiting for contents of ' + url) |
4015 | - time.sleep(SLEEP_AMOUNT) |
4016 | |
4017 | === added file 'tests/helpers/__init__.py' |
4018 | --- tests/helpers/__init__.py 1970-01-01 00:00:00 +0000 |
4019 | +++ tests/helpers/__init__.py 2014-11-19 17:44:21 +0000 |
4020 | @@ -0,0 +1,136 @@ |
4021 | +import json |
4022 | +import logging |
4023 | +import sys |
4024 | +import yaml |
4025 | +import unittest |
4026 | + |
4027 | +from os import getenv |
4028 | +from os.path import splitext, basename |
4029 | +from subprocess import check_output, CalledProcessError, PIPE |
4030 | +from time import sleep |
4031 | + |
4032 | +log = logging.getLogger(__file__) |
4033 | + |
4034 | + |
4035 | +@unittest.skipIf( |
4036 | + getenv("SKIP_TESTS", None), "Requested to skip all tests.") |
4037 | +class BaseTests(unittest.TestCase): |
4038 | + """ |
4039 | + Base class with some commonality between all test classes. |
4040 | + """ |
4041 | + |
4042 | + maxDiff = None |
4043 | + |
4044 | + def __str__(self): |
4045 | + file_name = splitext(basename(__file__))[0] |
4046 | + return "{} ({}.{})".format( |
4047 | + self._testMethodName, file_name, self.__class__.__name__) |
4048 | + |
4049 | + |
4050 | +def check_url(url, good_content, post_data=None, header=None, |
4051 | + interval=5, attempts=2, retry_unavailable=False): |
4052 | + """ |
4053 | +    Polls the given URL looking for the specified good_content. If it |
4054 | +    is not found within the allotted attempts, raises AssertionError. |
4055 | +    If found, returns the matching output. |
4056 | + |
4057 | + @param url: URL to poll |
4058 | + @param good_content: string we are looking for, or list of strings |
4059 | + @param post_data: optional POST data string |
4060 | + @param header: optional request header string |
4061 | + @param interval: number of seconds between polls |
4062 | + @param attempts: how many times we should poll |
4063 | + @param retry_unavailable: if host is unavailable, retry (default: False) |
4064 | + """ |
4065 | + output = "" |
4066 | + if type(good_content) is not list: |
4067 | + good_content = [good_content] |
4068 | + cmd = ["curl", url, "-k", "-L", "-s"] |
4069 | + if post_data: |
4070 | + cmd.extend(["-d", post_data]) |
4071 | + if header: |
4072 | + cmd.extend(["-H", header]) |
4073 | + for _ in range(attempts): |
4074 | + try: |
4075 | + output = check_output(cmd).decode("utf-8").strip() |
4076 | + except CalledProcessError as e: |
4077 | + if not retry_unavailable: |
4078 | + raise |
4079 | + status = e.returncode |
4080 | + # curl: rc=7, host is unavailable, this can happen |
4081 | + # when apache is being restarted, for instance |
4082 | + if status == 7: |
4083 | + log.info("Unavailable, retrying: {}".format(url)) |
4084 | + else: |
4085 | + raise |
4086 | + if all(content in output for content in good_content): |
4087 | + return output |
4088 | + sys.stdout.write(".") |
4089 | + sys.stdout.flush() |
4090 | + sleep(interval) |
4091 | + msg = """Content not found! |
4092 | + url: {} |
4093 | + good_content: {} |
4094 | + output: {} |
4095 | + """ |
4096 | + raise AssertionError(msg.format(url, good_content, output)) |
4097 | + |
4098 | + |
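The retry loop in `check_url` above can be exercised standalone by factoring out the polling pattern; this is an illustrative sketch only, with a stubbed `fetch` callable standing in for the actual `curl` invocation (the names `poll` and `fetch` are not part of the charm):

```python
from time import sleep


def poll(fetch, good_content, interval=0, attempts=3):
    """Generic version of check_url's loop: call fetch() up to `attempts`
    times, returning as soon as every expected string is present."""
    if not isinstance(good_content, list):
        good_content = [good_content]
    output = ""
    for _ in range(attempts):
        output = fetch()
        if all(content in output for content in good_content):
            return output
        sleep(interval)
    raise AssertionError(
        "Content not found: {!r} in {!r}".format(good_content, output))


# Simulate a service that only answers correctly on the third poll.
responses = iter(["starting up", "starting up", "It worked! Django"])
print(poll(lambda: next(responses), ["It worked!", "Django"]))
# → It worked! Django
```

The real helper layers curl-specific error handling (retrying on exit code 7, host unreachable) on top of this loop.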
4099 | +def juju_status(): |
4100 | + """Return a juju status structure.""" |
4101 | + cmd = ["juju", "status", "--format=json"] |
4102 | + output = check_output(cmd).decode("utf-8").strip() |
4103 | + return json.loads(output) |
4104 | + |
4105 | + |
4106 | +def get_service_config(service_name): |
4107 | + """ |
4108 | + Returns the configuration of the given service. Raises an error if |
4109 | + the service is not there. |
4110 | + |
4111 | + Runs `juju get --format=yaml <service_name>` under the hood. |
4112 | + @param service_name: string representing the service we are looking for. |
4113 | + """ |
4114 | + cmd = ["juju", "get", "--format=yaml", service_name] |
4115 | + output = check_output(cmd).decode("utf-8").strip() |
4116 | + return yaml.safe_load(output) |
4117 | + |
4118 | + |
4119 | +def find_address(juju_status, service_name): |
4120 | + """ |
4121 | + Find the first unit of service_name in the given juju status dictionary. |
4122 | + Doesn't handle subordinates, sorry. |
4123 | + |
4124 | + @param juju_status: dictionary representing the juju status output. |
4125 | + @param service_name: String representing the name of the service. |
4126 | + """ |
4127 | + services = juju_status["services"] |
4128 | + if service_name not in services: |
4129 | + raise ServiceOrUnitNotFound(service_name) |
4130 | + service = services[service_name] |
4131 | + units = service.get("units", {}) |
4132 | + unit_keys = list(sorted(units.keys())) |
4133 | + if unit_keys: |
4134 | + public_address = units[unit_keys[0]].get("public-address", "") |
4135 | + return public_address |
4136 | + else: |
4137 | + raise ServiceOrUnitNotFound(service_name) |
4138 | + |
4139 | + |
4140 | +def get_service_conf(unit, path): |
4141 | + """Fetch the contents of service.conf from the given unit.""" |
4142 | + cmd = ["juju", "ssh", unit, "sudo cat {} 2>/dev/null".format(path)] |
4143 | + output = check_output(cmd, stderr=PIPE).decode("utf-8").strip() |
4144 | + return output |
4145 | + |
4146 | + |
4147 | +class ServiceOrUnitNotFound(Exception): |
4148 | + """ |
4149 | + Exception thrown if a service cannot be found in the deployment or has |
4150 | + no units. |
4151 | + """ |
4152 | + |
4153 | + def __init__(self, service_name): |
4154 | + self.service_name = service_name |
4155 | + |
4156 | + |
4157 | |
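For reference, the status-walking logic in `find_address` can be exercised standalone against a canned `juju status --format=json` structure. The service name, unit names, and addresses below are made up for illustration, and `KeyError` stands in for the suite's `ServiceOrUnitNotFound`:

```python
# Hypothetical slice of `juju status --format=json` output.
sample_status = {
    "services": {
        "python-django": {
            "units": {
                "python-django/0": {"public-address": "10.0.3.1"},
                "python-django/1": {"public-address": "10.0.3.2"},
            }
        }
    }
}


def find_address(juju_status, service_name):
    """Return the public address of the first (lowest-sorted) unit."""
    services = juju_status["services"]
    if service_name not in services:
        raise KeyError(service_name)  # ServiceOrUnitNotFound in the suite
    units = services[service_name].get("units", {})
    unit_keys = sorted(units)
    if not unit_keys:
        raise KeyError(service_name)
    return units[unit_keys[0]].get("public-address", "")


print(find_address(sample_status, "python-django"))
# → 10.0.3.1
```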
4158 | === added directory 'tests/jujulib' |
4159 | === added file 'tests/jujulib/__init__.py' |
4160 | === added file 'tests/jujulib/deployer.py' |
4161 | --- tests/jujulib/deployer.py 1970-01-01 00:00:00 +0000 |
4162 | +++ tests/jujulib/deployer.py 2014-11-19 17:44:21 +0000 |
4163 | @@ -0,0 +1,45 @@ |
4164 | +from os import path |
4165 | +import tempfile |
4166 | +import shutil |
4167 | +import logging |
4168 | +import subprocess |
4169 | + |
4170 | + |
4171 | +class Deployer(object): |
4172 | + """ |
4173 | + Simple wrapper around juju-deployer. It's designed to copy the current |
4174 | + charm branch in place in a staging directory where juju-deployer will be |
4175 | + called. Juju-deployer will then use that when references to "lp:<charm>" |
4176 | + called. Juju-deployer will then use that copy when a reference to |
4177 | + "lp:<charm>" is used. |
4178 | + |
4179 | + def _stage_deployer_dir(self, deployer_dir, series): |
4180 | + """Stage the directory for calling deployer.""" |
4181 | + charm_src = path.dirname(path.dirname(path.dirname(__file__))) |
4182 | + charm_dest = path.join(deployer_dir, series, "python-django") |
4183 | + shutil.copytree(charm_src, charm_dest, ignore=shutil.ignore_patterns(".venv")) |
4184 | + |
4185 | + def deploy(self, target, config_files, timeout=None): |
4186 | + """ |
4187 | + Use juju-deployer to install `target` on current `juju env` |
4188 | + |
4189 | + @param target: target to deploy in the config file. |
4190 | + @param config_files: list of config files to pass to deployer (-c) |
4191 | + @param timeout: timeout in seconds (int or string is OK) |
4192 | + """ |
4193 | + deployer_dir = None |
4194 | + try: |
4195 | + deployer_dir = tempfile.mkdtemp() |
4196 | + for series in ["precise", "trusty"]: |
4197 | + self._stage_deployer_dir(deployer_dir, series) |
4198 | + args = ["juju-deployer", "-vdWL"] |
4199 | + for config_file in config_files: |
4200 | + args.extend(["-c", config_file]) |
4201 | + args.append(target) |
4202 | + if timeout is not None: |
4203 | + args.extend(["--timeout", str(timeout)]) |
4204 | + logging.info("(cwd=%s) RUN: %s", deployer_dir, args) |
4205 | + subprocess.check_call(args, cwd=deployer_dir) |
4206 | + finally: |
4207 | + if deployer_dir is not None: |
4208 | + shutil.rmtree(deployer_dir) |
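The staging layout `_stage_deployer_dir` builds can be sketched in isolation: the charm source is copied to `<staging>/<series>/python-django` so that juju-deployer resolves `lp:python-django` to the local branch under test. All directories below are throwaway temp dirs standing in for the real charm branch:

```python
import shutil
import tempfile
from os import path


def stage(charm_src, staging, series_list=("precise", "trusty")):
    """Copy the charm source under <staging>/<series>/python-django,
    mirroring Deployer._stage_deployer_dir (names are illustrative)."""
    for series in series_list:
        dest = path.join(staging, series, "python-django")
        # copytree creates the intermediate <series> directory itself.
        shutil.copytree(charm_src, dest,
                        ignore=shutil.ignore_patterns(".venv"))


# Demo with stand-in directories instead of a real charm branch.
charm_src = tempfile.mkdtemp()
with open(path.join(charm_src, "metadata.yaml"), "w") as f:
    f.write("name: python-django\n")
staging = tempfile.mkdtemp()
stage(charm_src, staging)
print(path.isfile(path.join(staging, "trusty", "python-django",
                            "metadata.yaml")))
# → True
shutil.rmtree(staging)
shutil.rmtree(charm_src)
```

Deleting the staging directory afterwards mirrors the `finally` cleanup in `deploy`.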
Patrick,
I like that, since you're moving to a Python-based charm, you've gutted the ansible bits in this merge proposal, so we have a unified source path. This is a really hefty merge and will be reviewed as a new charm submission.
I have a question, though: if the ansible version was that problematic, why are we treating pure Python as only a stopgap? Why not continue along the path of a Python charm, since that is our recommended technology to use when writing charms?