commit 62c13bc2f17d445e80627f02242c5a55cd8d5587
author    Chad Reynolds <[email protected]>  Fri Oct 27 13:58:02 2023 -0700
committer Chad Reynolds <[email protected]>  Wed Nov 01 14:42:09 2023 -0700
tree      b0ea39105ffdfa19016e3087062e1b2ec414529a
parent    1e07bf9a92ead2e956242287e0acb9fefee5304b

Change fetch_cvd to only fetch single host package

The previous behavior was (needlessly) fetching the host package tools
for each instance. Now, only a single host package is fetched no matter
the number of instances.

The config processing is also updated to have `host_package_build` as a
non-instance field. Both `fetch_cvd` and `cvd load` config processing
were updated to still handle cases where ONLY the host package tools (no
instance files) are fetched. The `cvd load` case for that cannot be
properly tested until local launches are supported.

In multi-fetch, and when instance subdirectories are specified, the host
tools are now put into their own subdirectory, adjacent to the instance
directories.

Additionally, the filepaths from the host tools download and extraction
are no longer appended to the `fetcher_config.json` file. That file is
used by `assemble_cvd` for tasks like image processing, and I was not
able to find or trigger a case where any reference to the host tools in
that file was required.

Bug: 296623471
Test: atest -c --host --no-bazel-mode cvd_load_test
Test: fetch_cvd --target_directory=fetch_test --host_package_build=aosp-main
Test: # only host package is downloaded to target directory
Test: fetch_cvd --target_directory=fetch_test --default_build=aosp-main
Test: # host package and instance files are downloaded to target directory
Test: cvd load host/cvd_test_configs/main_phone.json
Test: # host package files are in new .../artifacts/host_tools directory and device successfully launches
Test: cvd load host/cvd_test_configs/tm_phone-tm_watch-main_host_pkg.json
Test: # host package files are in new .../artifacts/host_tools directory and devices successfully launch

Change-Id: Id27ccbe3632241945f5c73e708298ca807d80510
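The two `fetch_cvd` invocations exercised in the Test: lines above can be sketched as follows. This is a hedged illustration only: the flags and the `aosp-main` branch come straight from the commit message, and the script simply skips if `fetch_cvd` is not installed.

```shell
# Sketch of the fetch_cvd invocations from the Test: lines above; flags
# and branch name (aosp-main) are taken from the commit message.
if ! command -v fetch_cvd >/dev/null 2>&1; then
    echo "fetch_cvd not installed; showing the commands only"
else
    # Fetch only the host package tools (no instance files):
    fetch_cvd --target_directory=fetch_test --host_package_build=aosp-main
    # Fetch the host package plus instance files for the default build:
    fetch_cvd --target_directory=fetch_test --default_build=aosp-main
fi
```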
Make sure virtualization with KVM is available.
grep -c -w "vmx\|svm" /proc/cpuinfo
This should return a non-zero value. If running on a cloud machine, this may take cloud-vendor-specific steps to enable. For Google Compute Engine specifically, see the GCE guide.
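The check above can be wrapped in a small script; a minimal sketch (assuming Linux) that also falls back to testing for the `/dev/kvm` device node, the method described below for ARM machines:

```shell
# Combine the x86 CPU-flag check (vmx/svm) with the /dev/kvm existence
# check; either signal means the kernel can back Cuttlefish with KVM.
if grep -q -w "vmx\|svm" /proc/cpuinfo 2>/dev/null; then
    echo "KVM capable: vmx/svm flag found in /proc/cpuinfo"
elif [ -e /dev/kvm ]; then
    echo "KVM capable: /dev/kvm exists"
else
    echo "KVM not available on this machine"
fi
```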
ARM specific steps: check for the existence of the /dev/kvm device node. Note that this method can also be used to confirm support of KVM on any environment.

Download, build, and install the host debian packages:
sudo apt install -y git devscripts config-package-dev debhelper-compat golang curl
git clone https://github.com/google/android-cuttlefish
cd android-cuttlefish
for dir in base frontend; do
  cd $dir
  debuild -i -us -uc -b -d
  cd ..
done
sudo dpkg -i ./cuttlefish-base_*_*64.deb || sudo apt-get install -f
sudo dpkg -i ./cuttlefish-user_*_*64.deb || sudo apt-get install -f
sudo usermod -aG kvm,cvdnetwork,render $USER
sudo reboot
The reboot will trigger installing additional kernel modules and applying udev rules.
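After the reboot, the install can be sanity-checked; a hedged sketch, where the group names come from the `usermod` step above and the `/dev/kvm` node is what the udev rules are expected to expose:

```shell
# Confirm the package install took effect after rebooting.
# Group names (kvm, cvdnetwork, render) match the usermod step above.
for grp in kvm cvdnetwork render; do
    if id -nG 2>/dev/null | grep -qw "$grp"; then
        echo "current user is in group $grp"
    else
        echo "current user is NOT in group $grp (did you reboot or re-login?)"
    fi
done
if [ -c /dev/kvm ]; then
    echo "/dev/kvm device node present"
fi
```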
1. Go to http://ci.android.com/
2. Enter a branch name. Start with aosp-main if you don't know what you're looking for.
3. Navigate to aosp_cf_x86_64_phone and click on userdebug for the latest build. (For ARM, use the branch aosp-main-throttled and the device target aosp_cf_arm64_only_phone-trunk_staging-userdebug.)
4. Click on Artifacts.
5. Scroll down to the OTA images. These packages look like aosp_cf_x86_64_phone-img-xxxxxx.zip -- it will always have img in the name. Download this file.
6. Scroll down to cvd-host_package.tar.gz. You should always download a host package from the same build as your images.
On your local system, combine the packages:

mkdir cf
cd cf
tar xvf /path/to/cvd-host_package.tar.gz
unzip /path/to/aosp_cf_x86_64_phone-img-xxxxxx.zip
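Before launching, it can be worth confirming that both packages landed in the same directory; a hedged sketch, run from inside the cf/ directory created above:

```shell
# Sanity-check the combined cf/ directory: host tools come from the
# tarball, .img files from the OTA image zip.
for f in bin/launch_cvd bin/stop_cvd bin/adb; do
    if [ -x "$f" ]; then
        echo "found $f"
    else
        echo "missing $f -- re-extract cvd-host_package.tar.gz here"
    fi
done
if ls ./*.img >/dev/null 2>&1; then
    echo "image files present"
else
    echo "no .img files -- re-unzip the -img- package here"
fi
```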
Launch cuttlefish with:
$ HOME=$PWD ./bin/launch_cvd
You can use adb to debug it, just like a physical device:

$ ./bin/adb -e shell
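Beyond an interactive shell, a few other common adb interactions, sketched under the assumption that the device has finished booting; `-e` targets the emulated (virtual) device, as in the command above:

```shell
# A few adb examples against the virtual device; run from the cf/
# directory so ./bin/adb resolves.
if [ -x ./bin/adb ]; then
    ./bin/adb -e wait-for-device                        # block until the device is ready
    ./bin/adb -e shell getprop ro.build.version.release # print the Android version
    ./bin/adb -e logcat -d | tail -n 5                  # dump the last few log lines
else
    echo "./bin/adb not found; run these from the cf/ directory"
fi
```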
When launching with --start_webrtc (the default), you can see a list of all available devices at https://localhost:8443. For more information, see the WebRTC on Cuttlefish documentation.
You will need to stop the virtual device from the same directory you used to launch it:
$ HOME=$PWD ./bin/stop_cvd
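The whole launch/stop cycle above can be sketched end to end. This is a hedged sketch: HOME must point at the same directory for both commands so stop_cvd can find the runtime files, and `--daemon` (a launch_cvd flag) backgrounds the launcher once boot completes.

```shell
# End-to-end launch/stop cycle; run from the cf/ directory built above.
if [ -x ./bin/launch_cvd ] && [ -x ./bin/stop_cvd ]; then
    HOME=$PWD ./bin/launch_cvd --daemon      # boot, then return control
    HOME=$PWD ./bin/adb -e wait-for-device   # wait until adb can reach it
    # ... interact with the device here ...
    HOME=$PWD ./bin/stop_cvd                 # tear the device down
else
    echo "run this from the cf/ directory that holds bin/launch_cvd"
fi
```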