This is years late, but hopefully someone finds it useful. Kubernetes expects your application to write its logs to stdout/stderr, not to a file on the container's file system. If you arrange for your container to do that, the logs become available through kubectl logs.
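For example, assuming a pod named my-app (a placeholder name for illustration):

```
# Print the logs of the pod's container
kubectl logs my-app

# Follow the log stream, similar to tail -f
kubectl logs -f my-app

# If the pod runs several containers, name the one you want
kubectl logs my-app -c my-container
```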
For this specific question of accessing logs on a pod's file system, the quickest method is probably to use kubectl exec (https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/) to get an interactive shell inside the pod and then inspect the logs with your normal tooling.
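Something like this, where the pod name and log path are placeholders you'd swap for your own:

```
# Open an interactive shell inside the pod
kubectl exec -it my-app -- /bin/sh

# Then, from inside the pod, inspect the log file
tail -f /var/log/app/app.log
```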
In the very specific outlier case where you want the logs stored on the file system but also streamed into Azure's log service, I can't get you the whole way, but I can give you a couple of hints. I know you can configure Fluent Bit to stream logs straight into Azure Log Analytics (https://docs.fluentbit.io/manual/pipeline/outputs/azure); I also know that Fluent Bit's tail input (https://docs.fluentbit.io/manual/pipeline/inputs/tail) can be configured to watch files on the file system. If you add Fluent Bit to your pod as a sidecar, or run it as a DaemonSet on the AKS nodes and expose your logs directory to the node (e.g. via a hostPath volume), you can achieve that goal.
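As a rough sketch, a Fluent Bit configuration along these lines tails a log file and ships it to Log Analytics; the path, tag, workspace ID, and key are placeholders you'd fill in from your own environment:

```
[INPUT]
    Name         tail
    Path         /var/log/app/*.log
    Tag          app.logs

[OUTPUT]
    Name         azure
    Match        app.logs
    Customer_ID  <your Log Analytics workspace ID>
    Shared_Key   <your workspace primary key>
    Log_Type     app_logs
```

In the sidecar variant, Path would point at a volume (e.g. an emptyDir) shared between your app container and the Fluent Bit container; in the DaemonSet variant, it would point at wherever the logs land on the node.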