Conversation

@king-p3nguin
Contributor

Resolve #212

(I am contributing to this project as a unitaryHACK participant)

```python
params = K.implicit_randn(shape=[nlayers, 2])

# run only once to trigger the compilation
K.jit(
```
Contributor

why use jit and grad here?

Contributor Author

Actually, I find that jitting the function here speeds up the path-finding time. As you pointed out, the gradient calculation was unnecessary, so I removed it.
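For context, a minimal sketch of the jit-without-grad pattern under discussion, assuming the TensorFlow backend and a placeholder ansatz (the circuit structure, sizes, and the `energy` function below are illustrative assumptions, not the PR's actual benchmark code):

```python
import tensorcircuit as tc

K = tc.set_backend("tensorflow")  # assumed backend; any ML backend works

n, nlayers = 8, 2  # illustrative sizes


def energy(params):
    # hypothetical ansatz standing in for the benchmark circuit
    c = tc.Circuit(n)
    for i in range(n):
        c.h(i)
    for j in range(nlayers):
        for i in range(n - 1):
            c.rzz(i, i + 1, theta=params[j, 0])
        for i in range(n):
            c.rx(i, theta=params[j, 1])
    return K.real(c.expectation_ps(z=[0, 1]))


# jit only, no grad: a single traced call is enough to trigger compilation,
# and with it the contraction-path search
energy_jit = K.jit(energy)
params = K.implicit_randn(shape=[nlayers, 2])
energy_jit(params)
```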



```python
def trigger_cotengra_optimization(n, nlayers, d):
    g = nx.random_regular_graph(d, n)
```
Contributor

please consider several different types of graphs (say, 1D lattice, 2D lattice, or all-to-all connectivity) for the circuit architectures, for a better benchmark?

Contributor Author

@king-p3nguin Jun 3, 2024

I added graph_args to benchmark the performance for 1D lattice, 2D lattice, and all-to-all connected graphs.
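A sketch of how such a `graph_args` table could look, built from standard networkx generators (the dictionary layout and lambda signatures are assumptions for illustration; the merged code may organize this differently):

```python
import networkx as nx

# hypothetical layout; the merged PR may structure graph_args differently
graph_args = {
    "1d-lattice": lambda n: nx.cycle_graph(n),
    "2d-lattice": lambda n: nx.convert_node_labels_to_integers(
        nx.grid_2d_graph(n // 2, 2)  # n nodes on an (n // 2) x 2 grid
    ),
    "all-to-all": lambda n: nx.complete_graph(n),
}

for name, make_graph in graph_args.items():
    g = make_graph(8)
    print(name, g.number_of_nodes(), g.number_of_edges())
```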

minimize="flops",
parallel=True,
max_time=30,
max_repeats=30,
Contributor

max_repeats seems too small? Maybe try 64 or 128. minimize can also be given as "combo" or "size", apart from "flops".

Contributor Author

@king-p3nguin Jun 3, 2024

I increased the max_repeats and added minimize_args to benchmark various minimize arguments that cotengra supports.
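For reference, a sketch of what sweeping the minimize options could look like with cotengra's ReusableHyperOptimizer (the list contents, time/repeat budgets, and the tensorcircuit `set_contractor` wiring are illustrative assumptions, not the merged configuration):

```python
import cotengra as ctg
import tensorcircuit as tc

# hypothetical sweep; the merged minimize_args may differ
minimize_args = ["flops", "size", "combo"]

for minimize in minimize_args:
    opt = ctg.ReusableHyperOptimizer(
        minimize=minimize,
        parallel=True,
        max_time=60,
        max_repeats=128,
    )
    # hand the optimizer to tensorcircuit as a custom contraction-path finder
    tc.set_contractor("custom", optimizer=opt, preprocessing=True)
    # ... build and contract the benchmark circuit here, recording timings ...
```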

@refraction-ray merged commit 9827b39 into tencent-quantum-lab:master on Jun 3, 2024
@refraction-ray
Contributor

Thanks for the nice contribution. Merged with some small tweaks:

  1. increase the max_time to 60
  2. comment out most of the options by default
  3. use reuse=False in the expectation calculation to directly compute the tensor network for the observable (see the sketch after this list)
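A minimal sketch of the reuse=False pattern, assuming a small placeholder circuit (the circuit and observable are illustrative):

```python
import tensorcircuit as tc

c = tc.Circuit(4)  # illustrative circuit
for i in range(4):
    c.h(i)

# reuse=True (the default) computes and caches the full wavefunction first;
# reuse=False contracts a single tensor network that includes the observable,
# which is the contraction the path-finding benchmark should exercise
ev = c.expectation((tc.gates.z(), [0]), (tc.gates.z(), [1]), reuse=False)
```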
@refraction-ray
Contributor

> I find that jitting the function here speeds up the path-finding time

This part still requires further investigation in the future, as the path-finding stage seems unrelated to jitting: the contraction paths are found outside the ML backend.

@king-p3nguin deleted the cotengra branch on Jun 3, 2024, 07:42