# Pytest with 89% coverage #19
Changes from 1 commit
```diff
@@ -13,15 +13,15 @@
 @pytest.mark.skipif(nogo, reason="Missing modules (autograd or pymanopt)")
 def test_fda():

-    n = 90  # nb samples in source and target datasets
+    n_samples = 90  # nb samples in source and target datasets
     np.random.seed(0)

-    # generate circle dataset
-    xs, ys = ot.datasets.get_data_classif('gaussrot', n)
+    # generate gaussian dataset
+    xs, ys = ot.datasets.get_data_classif('gaussrot', n_samples)

-    nbnoise = 8
+    n_features_noise = 8

-    xs = np.hstack((xs, np.random.randn(n, nbnoise)))
+    xs = np.hstack((xs, np.random.randn(n_samples, n_features_noise)))

     p = 1
```

> **Review comment** (on the `skipif` line): now it says autograd and pymanopt :)
>
> **Reply:** corrected the comment at the top of the test file.

> **Review comment** (on `np.random.seed(0)`): use `RandomState`.
```diff
@@ -35,20 +35,15 @@ def test_fda():
 @pytest.mark.skipif(nogo, reason="Missing modules (autograd or pymanopt)")
 def test_wda():

-    n = 100  # nb samples in source and target datasets
-    nz = 0.2
+    n_samples = 100  # nb samples in source and target datasets
     np.random.seed(0)

-    # generate circle dataset
-    t = np.random.rand(n) * 2 * np.pi
-    ys = np.floor((np.arange(n) * 1.0 / n * 3)) + 1
-    xs = np.concatenate(
-        (np.cos(t).reshape((-1, 1)), np.sin(t).reshape((-1, 1))), 1)
-    xs = xs * ys.reshape(-1, 1) + nz * np.random.randn(n, 2)
+    # generate gaussian dataset
+    xs, ys = ot.datasets.get_data_classif('gaussrot', n_samples)

-    nbnoise = 8
+    n_features_noise = 8

-    xs = np.hstack((xs, np.random.randn(n, nbnoise)))
+    xs = np.hstack((xs, np.random.randn(n_samples, n_features_noise)))

     p = 2
```

> **Review comment** (on `np.random.seed(0)`): `RandomState`.
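The renamed test setup above (descriptive `n_samples` / `n_features_noise` instead of `n` / `nbnoise`, plus informative data padded with noise features) can be exercised with a numpy-only sketch. `make_gauss_data` below is a hypothetical stand-in for `ot.datasets.get_data_classif('gaussrot', n)`, used here only so the example runs without POT installed:

```python
import numpy as np

def make_gauss_data(n_samples, rng):
    # Hypothetical stand-in for ot.datasets.get_data_classif('gaussrot', n):
    # two Gaussian blobs with labels 1 and 2 (illustration only).
    half = n_samples // 2
    xs = np.vstack((rng.randn(half, 2) + [0.0, 2.0],
                    rng.randn(n_samples - half, 2) - [0.0, 2.0]))
    ys = np.concatenate((np.ones(half), 2 * np.ones(n_samples - half)))
    return xs, ys

rng = np.random.RandomState(0)  # seeded RandomState, as the reviewer suggests
n_samples = 90
n_features_noise = 8

xs, ys = make_gauss_data(n_samples, rng)
# pad the informative 2-D data with pure-noise features, as in the test
xs = np.hstack((xs, rng.randn(n_samples, n_features_noise)))
print(xs.shape)  # (90, 10)
```

Using a local `RandomState` instead of `np.random.seed(0)` keeps the test reproducible without mutating the global random state shared by other tests.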
> **Review comment:** `RandomState`. The `get_data_classif` function should take the rng as a parameter and use it instead of `np.random.randn`; see the `check_random_state` function in sklearn.
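The reviewer's suggestion can be sketched with a minimal reimplementation of sklearn's `check_random_state` idea. The `get_noisy_data` generator below is a hypothetical signature, not POT's actual `get_data_classif`; it shows how accepting a `random_state` parameter makes the data generation reproducible without touching the global seed:

```python
import numbers
import numpy as np

def check_random_state(seed):
    """Turn seed into a np.random.RandomState (mirrors sklearn's helper)."""
    if seed is None:
        return np.random.mtrand._rand  # the global RandomState instance
    if isinstance(seed, numbers.Integral):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError("%r cannot be used to seed a RandomState" % seed)

def get_noisy_data(n_samples, n_features_noise, random_state=None):
    # Hypothetical generator taking the rng as a parameter instead of
    # calling np.random.randn on the global state.
    rng = check_random_state(random_state)
    xs = rng.randn(n_samples, 2)
    return np.hstack((xs, rng.randn(n_samples, n_features_noise)))

a = get_noisy_data(10, 3, random_state=0)
b = get_noisy_data(10, 3, random_state=0)
print(np.allclose(a, b))  # True: same seed gives identical data
```

Accepting `None`, an int, or an existing `RandomState` lets callers choose between convenience and full control, which is exactly the flexibility sklearn's helper provides.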