diff --git a/docs/FAQs/General_FAQs.md b/docs/FAQs/General_FAQs.md index 398ad2a788d85a26e20849170fc2bcf2ff0e9efa..800096d5a23f1be1745c2f2b236dab044b35c73d 100644 --- a/docs/FAQs/General_FAQs.md +++ b/docs/FAQs/General_FAQs.md @@ -39,9 +39,9 @@ Yes, currently all comments need to be made in Doxygen. --- ## If I cannot solve my build problems, what is the most effective way to ask for help? -Many build problems are related to the environment settings. +Many build problems are related to the environment settings. -1. Run the script to get your environment: `bash scripts/env.sh >& env.txt` +1. Run the script to get your environment: `bash scripts/env.sh >& env.txt` 2. Post the content of env.txt to our GitHub issues page and someone from our team will get in touch with you. @@ -52,10 +52,10 @@ Use these ports for HMI and Dreamview: --- ## Why is there no ROS environment in dev docker? -The ROS package is downloaded when you start to build apollo: -`bash apollo.sh build`. +The ROS package is downloaded when you start to build Apollo: +`bash apollo.sh build`. -1. Run the following command inside Docker to set up the ROS environment after the build is complete: +1. Run the following command inside Docker to set up the ROS environment after the build is complete: `source /apollo/scripts/apollo_base.sh` 2. Run ROS-related commands such as rosbag, rostopic and so on. diff --git a/docs/howto/how_to_build_and_release.md b/docs/howto/how_to_build_and_release.md index e0f76ca360b0554c774be15da57c6f3e7168b4b2..419d25759f4bb28edc2453142a8c0b4d380ea148 100644 --- a/docs/howto/how_to_build_and_release.md +++ b/docs/howto/how_to_build_and_release.md @@ -12,7 +12,7 @@ The system requirement for building Apollo is Ubuntu 14.04. Using a Docker conta To install docker, you may refer to [Official guide to install the Docker-ce](https://docs.docker.com/install/linux/docker-ce/ubuntu). 
-Don't forget to test it using +Don't forget to test it using [post-installation steps for Linux](https://docs.docker.com/install/linux/linux-postinstall). ## Build Apollo @@ -55,16 +55,16 @@ sudo dpkg -i .deb sudo apt-get install -f # Install dependencies ``` ### Start VSCode -Start VSCode with the following command: +Start VSCode with the following command: ```bash code ``` ### Open the Apollo project in VSCode -Use the keyboard shortcut **(Ctrl+K Ctrl+O)** to open the Apollo project. +Use the keyboard shortcut **(Ctrl+K Ctrl+O)** to open the Apollo project. ### Build the Apollo project in VSCode -Use the keyboard shortcut **(Ctrl+Shift+B)** to build the Apollo project. +Use the keyboard shortcut **(Ctrl+Shift+B)** to build the Apollo project. ### Run all unit tests for the Apollo project in VSCode -Select the "Tasks->Run Tasks..." menu command and click "run all unit tests for the apollo project" from a popup menu to check the code style for the Apollo project. +Select the "Tasks->Run Tasks..." menu command and click "run all unit tests for the apollo project" from a popup menu to run all unit tests for the Apollo project. If you are currently developing on 16.04, you will get a build error. As seen in the image below, two perception tests fail. To avoid this build error, refer to [how to build Apollo using Ubuntu 16](how_to_run_apollo_2.5_with_ubuntu16.md). @@ -72,9 +72,9 @@ As seen in the image below, 2 perception tests. To avoid this build error, refer ![Build error](images/build_fail.png) ### Run a code style check task for the Apollo project in VSCode -Select the "Tasks->Run Tasks..." menu command and click "code style check for the apollo project" from a popup menu to check the code style for the Apollo project. +Select the "Tasks->Run Tasks..." menu command and click "code style check for the apollo project" from a popup menu to check the code style for the Apollo project. ### Clean the Apollo project in VSCode -Select the "Tasks->Run Tasks..." 
menu command and click "clean the apollo project" from a popup menu to clean the Apollo project. +Select the "Tasks->Run Tasks..." menu command and click "clean the apollo project" from a popup menu to clean the Apollo project. ### Change the building option You can change the "build" option to another one such as "build_gpu" (refer to the "apollo.sh" file for details) in ".vscode/tasks.json" diff --git a/docs/howto/how_to_debug_dreamview_start_problem.md b/docs/howto/how_to_debug_dreamview_start_problem.md index 2a746a981b14d4261f6e049abd79464e67c3c045..81b45f080bff81ffa0388b2fe72367541e9ea4fa 100644 --- a/docs/howto/how_to_debug_dreamview_start_problem.md +++ b/docs/howto/how_to_debug_dreamview_start_problem.md @@ -100,7 +100,7 @@ Program terminated with signal SIGILL, Illegal instruction. #9 0x0000000000000000 in ?? () (gdb) q -@in_dev_docker:/apollo$ addr2line -C -f -e /usr/local/lib/libpcl_sample_consensus.so.1.7.2 0x375bec +@in_dev_docker:/apollo$ addr2line -C -f -e /usr/local/lib/libpcl_sample_consensus.so.1.7.2 0x375bec double boost::math::detail::erf_inv_imp, boost::math::policies::promote_double, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy> >(double const&, double const&, boost::math::policies::policy, boost::math::policies::promote_double, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, 
boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy> const&, mpl_::int_<64> const*) ??:? ``` @@ -118,7 +118,7 @@ apolloauto/apollo map_volume-sunnyvale_big_loop-latest 80aca30fa08a 3 apolloauto/apollo localization_volume-x86_64-latest be947abaa650 2 months ago 5.74MB apolloauto/apollo map_volume-sunnyvale_loop-latest 36dc0d1c2551 2 months ago 906MB -build cmd: +build cmd: in_dev_docker:/apollo$ ./apollo.sh build_no_perception dbg ``` 2. Compile pcl and copy the pcl library files to `/usr/local/lib`: @@ -130,7 +130,7 @@ See [/apollo/WORKSPACE.in](https://github.com/ApolloAuto/apollo/blob/master/WORK Inside docker: ``` (to keep pcl in host, we save pcl under /apollo) -cd /apollo +cd /apollo git clone https://github.com/PointCloudLibrary/pcl.git git checkout -b pcl- @@ -145,9 +145,9 @@ index f0a5600..42c182e 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -7,6 +7,15 @@ endif() - + set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "possible configurations" FORCE) - + +if (CMAKE_VERSION VERSION_LESS "3.1") +# if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU") + set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11") @@ -193,7 +193,7 @@ If CPU does not support AVX instructions, and you gdb the coredump file under /a ``` Program terminated with signal SIGILL, Illegal instruction. 
-#0 0x000000000112b70a in std::_Hashtable, std::__detail::_Identity, std::equal_to, google::protobuf::hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits >::_Hashtable (this=0x3640288, __bucket_hint=10, +#0 0x000000000112b70a in std::_Hashtable, std::__detail::_Identity, std::equal_to, google::protobuf::hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits >::_Hashtable (this=0x3640288, __bucket_hint=10, __h1=..., __h2=..., __h=..., __eq=..., __exk=..., __a=...) ---Type to continue, or q to quit--- at /usr/include/c++/4.8/bits/hashtable.h:828 diff --git a/docs/howto/how_to_run_map_verification_tool.md b/docs/howto/how_to_run_map_verification_tool.md index ea7af4aeed401c1ecba836edb4aa42213600185f..a712d611aa6baf4d5afb70a5710c06a97fc57dfc 100644 --- a/docs/howto/how_to_run_map_verification_tool.md +++ b/docs/howto/how_to_run_map_verification_tool.md @@ -8,7 +8,7 @@ In order to run your data on this tool, please follow the steps below: 1. Build Apollo as recommended in the [Build Guide](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md) until the `./apollo.sh build` step. 2. Once inside dev docker and after running `./apollo.sh build`, please go to the folder `modules/tools/map_datachecker/` -3. Starting the server: +3. Start the server: ```bash bash server.sh start ``` @@ -54,5 +54,5 @@ In order to run your data on this tool, please follow the steps below: ## Tips -1. The default value of `cmd` is `start` +1. The default value of `cmd` is `start` 2. All error messages will be printed to help better prepare your map data. 
Please follow the error messages exactly as recommended diff --git a/docs/howto/how_to_run_offline_perception_visualizer.md b/docs/howto/how_to_run_offline_perception_visualizer.md index 8e508615c6e428914de3ed6a5087e59dd04c6cdf..86909752778993d30764eab6b3adb3df552e4ecd 100644 --- a/docs/howto/how_to_run_offline_perception_visualizer.md +++ b/docs/howto/how_to_run_offline_perception_visualizer.md @@ -45,7 +45,7 @@ Before running the visualizer, you may setup the data directories and the algori ``` /apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool ``` -Now you will see a pop-up window showing the perception result with point cloud frame-by-frame. The obstacles are shown with a purple rectangle bounding boxes. There are three modes to visualize the point cloud with/without the ROI area: -* Showing all the point cloud with grey color; -* Showing the point cloud of ROI area only with green color; +Now you will see a pop-up window showing the perception result with the point cloud frame by frame. The obstacles are shown with purple rectangular bounding boxes. There are three modes to visualize the point cloud with/without the ROI area: +* Showing all the point cloud with grey color; +* Showing the point cloud of ROI area only with green color; * Showing the point cloud of ROI area with green color and that of other areas with grey color. You may press the `S` key on keyboard to switch the modes in turn. diff --git a/docs/quickstart/apollo_1_0_quick_start_developer.md b/docs/quickstart/apollo_1_0_quick_start_developer.md index 56bcee4c00793b00d1b63c239489fa89cce7ec3e..3db15e84ffffdce39f9f3580dc8ea125b5c107d4 100644 --- a/docs/quickstart/apollo_1_0_quick_start_developer.md +++ b/docs/quickstart/apollo_1_0_quick_start_developer.md @@ -56,7 +56,7 @@ cd install sudo ./install_kernel.sh ``` 3. Reboot your system by the `reboot` command -4. Build the ESD CAN driver source code +4. 
Build the ESD CAN driver source code Now you need to build the ESD CAN driver source code according to [ESDCAN-README.md](https://github.com/ApolloAuto/apollo-kernel/blob/master/linux/ESDCAN-README.md) ## Build your own kernel. diff --git a/docs/quickstart/apollo_1_5_quick_start.md b/docs/quickstart/apollo_1_5_quick_start.md index 49d0e373534595bf884127150ded997ac18bfabe..b1f72d2dcc3f6ee481efd4c66e9f8f591cddc366 100644 --- a/docs/quickstart/apollo_1_5_quick_start.md +++ b/docs/quickstart/apollo_1_5_quick_start.md @@ -13,7 +13,7 @@ This quick start focuses on Apollo 1.5 new features. For general Apollo concepts Use your favorite browser to access HMI web service in your host machine browser with URL http://localhost:8887 4. Select Vehicle and Map - + You'll be required to set up a profile before doing anything else. Click the dropdown menu to select your HDMap and vehicle in use. The lists are defined in [HMI config file](https://raw.githubusercontent.com/ApolloAuto/apollo/master/modules/hmi/conf/config.pb.txt). 
*Note: It's also possible to change profile on the right panel of HMI, but just remember to click "Reset All" on the top-right corner to restart the system.* diff --git a/docs/quickstart/apollo_2_5_technical_tutorial.md b/docs/quickstart/apollo_2_5_technical_tutorial.md index 59516b9d7f7faeaccd35586ba8934079e40a467c..0a67a929968c4ea698f9dfb4a00da058a1a55267 100644 --- a/docs/quickstart/apollo_2_5_technical_tutorial.md +++ b/docs/quickstart/apollo_2_5_technical_tutorial.md @@ -72,4 +72,4 @@ * [How to use Apollo 2.5 navigation mode](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md "How to use Apollo 2.5 navigation mode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/dreamview_usage_table.md "Introduction of Dreamview") - + diff --git a/docs/specs/Apollo_Sensor_Unit/Apollo_Sensor_Unit_Installation_Guide.md b/docs/specs/Apollo_Sensor_Unit/Apollo_Sensor_Unit_Installation_Guide.md index fb1bf48ed9759969e7e3f7bc2452427a28818463..749758215b2d456a0991f8367fb0e9c97aeec076 100644 --- a/docs/specs/Apollo_Sensor_Unit/Apollo_Sensor_Unit_Installation_Guide.md +++ b/docs/specs/Apollo_Sensor_Unit/Apollo_Sensor_Unit_Installation_Guide.md @@ -2,9 +2,9 @@ Apollo Sensor Unit (ASU) is designed to work with Industrial PC (IPC) to implement sensor fusion, vehicle control and network access in Apollo's autonomous driving platform. -The ASU system provides sensor interfaces to collect data from various sensors, including cameras, Lidars, Radars, and Ultrasonic Sensors. The system also utilizes pulse per second (PPS) and GPRMC signals from GNSS receiver to implement data collection synchronization for the camera and LiDAR sensors. +The ASU system provides sensor interfaces to collect data from various sensors, including cameras, Lidars, Radars, and Ultrasonic Sensors. 
The system also utilizes pulse per second (PPS) and GPRMC signals from the GNSS receiver to implement data collection synchronization for the camera and LiDAR sensors. -The communication between the ASU and the IPC is through PCI Express Interface. ASU collects sensor data and passes to IPC via PCI Express Interface, and the IPC uses the ASU to send out Vehicle Control commands in the Controller Area Network (CAN) protocol. +The communication between the ASU and the IPC is through the PCI Express interface. The ASU collects sensor data and passes it to the IPC via PCI Express, and the IPC uses the ASU to send out vehicle control commands using the Controller Area Network (CAN) protocol. In addition, Lidar connectivity via Ethernet, WWAN gateway via 4G LTE module, and WiFi access point via WiFi module will be enabled in the future releases. @@ -14,19 +14,19 @@ In addition, Lidar connectivity via Ethernet, WWAN gateway via 4G LTE module, an #### Front Panel Connectors -1. External GPS PPS / GPRMC Input Port +1. External GPS PPS / GPRMC Input Port 2. FAKRA Camera Data Input Port (5 ports) -3. 100 Base-TX/1000 Base-T Ethernet Port (2 Ports) -4. KL-15 (AKA Car Ignition) Signal Input Port +3. 100 Base-TX/1000 Base-T Ethernet Port (2 Ports) +4. KL-15 (AKA Car Ignition) Signal Input Port -#### Rear Panel Connectors +#### Rear Panel Connectors -1. General purpose UART port(reserved) +1. General purpose UART port (reserved) 2. External PCI Express Port (Support X4 or X8) For connections to IPC, please use EXTN port. 3. GPS PPS/GPRMC Output Rectangular Port (3 Ports) for LiDAR 4. Power and PPS/GPRMC Cylindrical Output Port for Stereo Camera/LiDAR 5. CAN Bus (4 Ports) -6. Main Power Input Connector +6. Main Power Input Connector ### Purchase Channels @@ -36,21 +36,21 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de 1. Power Cable - The main power is from vehicle battery, 9V ~ 36V, 120W. + The main power is from the vehicle battery, 9V ~ 36V, 120W. 
![conn-DTF13-2P](images/conn-DTF13-2P.jpeg) |MFR|MPN|Description| |---------------|--------|-----------| |TE Connectivity|DTF13-2P|DT RECP ASM| - + | PIN # | NAME | I/O | Description | | ----- | ---- | ---- | ------------------ | | 1 | 12V | PWR | 12V (9V~36V, 120W) | | 2 | GND | PWR | GROUND | -2. FPD-Link III cameras. +2. FPD-Link III cameras. There are 5 FAKRA connectors for FPD Link III cameras in ASU Front Panel labeled with 1~5, respectively, from right to left. The ASU can support up to 5 cameras by enabling Camera 1 ~ 5 whose deserializers (TI, DS90UB914ATRHSTQ1) convert FPD Link III signals into parallel data signals. @@ -71,7 +71,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de | MFR | MPN | Description | | :-------------- | --------- | ----------------------------------------- | | TE Connectivity | 1565749-1 | Automotive Connectors 025 CAP ASSY, 4 Pin | - + | PIN # | NAME | I/O | Description | | ----- | ----- | ----- | ------------------------------------------------------------ | @@ -82,14 +82,14 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de 4. GPS synchronization output channels - ASU forwards the duplicated GPS PPS/GPRMC from external GPS to the customized 8 Pin connector. This connector provides 3 sets of PPS/GPRMC output for sensors that need to be synchronized, such as LiDARs, etc. + ASU forwards the duplicated GPS PPS/GPRMC from external GPS to the customized 8 Pin connector. This connector provides 3 sets of PPS/GPRMC output for sensors that need to be synchronized, such as LiDARs, etc. 
![1376350-2](images/1376350-2.jpeg) |MFR| MPN| Description| | --------------- | --------- | ------------------------------------------------- | | TE Connectivity | 1376350-2 | Automotive Connectors 025 I/O CAP HSG ASSY, 8 Pin | - + | PIN # | NAME | I/O | Description | | ----- | ------ | ------ | ------------------------------------------------------- | @@ -111,7 +111,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de | MFR | MPN | Description | | --------------- | --------- | -------------------------------------------------- | | TE Connectivity | 1318772-2 | Automotive Connectors 025 I/O CAP HSG ASSY, 12 Pin | - + | PIN # | NAME | I/O | Description | | ----- | ------ | ----- | --------------- | @@ -128,7 +128,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de | 11 | CANL-3 | INOUT | Channel 3, CANL | | 12 | GND | PWR | Ground | -6. GPS PPS / GPRMC Output Rectangular Port +6. GPS PPS / GPRMC Output Rectangular Port The Connector provides 8 ports for 3 LiDARs @@ -151,7 +151,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de | 6 | GND | PWR | Ground (ASU) -> Pin3 GND (LiDAR 1,3) | | 7 | GND | PWR | Ground (ASU) -> Pin3 GND (LiDAR 2) | | 8 | PPS | OUT | PPS (ASU) -> Pin1 GPS_PULSE_CNT (LiDAR 3)| - + 7. 
PPS/GPRMC Cylindrical Output Port for Stereo Camera/ LiDAR @@ -164,7 +164,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de | --------------- | --------- | -------------------------------------------------- | | Digi-Key | APC1735-ND | CONN RCPT FMALE 8POS SOLDER CUP | - + | PIN # | NAME | I/O | Description | | ----- | ------ | ----- | --------------- | @@ -174,7 +174,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de - + ## Disclaimer This device is `Apollo Platform Supported` \ No newline at end of file diff --git a/docs/specs/Camera/Leopard_Camera_LI-USB30-AZ023WDR__Installation_Guide.md b/docs/specs/Camera/Leopard_Camera_LI-USB30-AZ023WDR__Installation_Guide.md index d5fbfa26c3e70b2fc0c2e659af6b3a520db05d8b..c8d220f2db687781ea78e4d0448d8020aad1d13b 100644 --- a/docs/specs/Camera/Leopard_Camera_LI-USB30-AZ023WDR__Installation_Guide.md +++ b/docs/specs/Camera/Leopard_Camera_LI-USB30-AZ023WDR__Installation_Guide.md @@ -1,8 +1,8 @@ # Guide for LI-USB30-AZ023WDRB -The cameras used are LI-USB30-AR023ZWDR with standard USB 3.0 case manufactured by Leopard Imaging Inc. This line of product is based on AZ023Z 1080P sensor and AP0202 ISP from ON Semiconductor. It supports external trigger and software trigger. +The cameras used are LI-USB30-AR023ZWDR with standard USB 3.0 case manufactured by Leopard Imaging Inc. This line of product is based on AZ023Z 1080P sensor and AP0202 ISP from ON Semiconductor. It supports external trigger and software trigger. -We recommend using two cameras with 6 mm lens and one with 25 mm lens to achieve the required performance for traffic light detection application. +We recommend using two cameras with 6 mm lens and one with 25 mm lens to achieve the required performance for traffic light detection application. 
![camera_image](images/LI-USB30-AZ023ZWDRB.png) @@ -12,7 +12,7 @@ This camera can be connected to the IPC through USB 3.0 cable for power and data You can find additional information regarding the Leopard Imaging Inc. cameras on their [website](https://leopardimaging.com/product/li-usb30-ar023zwdrb/) -* [Data Sheet](https://www.leopardimaging.com/LI-USB30-AR023ZWDRB_datasheet.pdf) +* [Data Sheet](https://www.leopardimaging.com/LI-USB30-AR023ZWDRB_datasheet.pdf) * [Trigger Cable](https://leopardimaging.com/product/li-usb3-trig_cable/) ## Disclaimer diff --git a/docs/specs/Camera/Truly_Argus_Camera_Installation_Guide.md b/docs/specs/Camera/Truly_Argus_Camera_Installation_Guide.md index bb767ceee649190be306f362b90252b48835c6af..db0300a00e7f81f5bc0891d641e5327949cb9e3f 100644 --- a/docs/specs/Camera/Truly_Argus_Camera_Installation_Guide.md +++ b/docs/specs/Camera/Truly_Argus_Camera_Installation_Guide.md @@ -1,8 +1,8 @@ # Guide for Argus Camera -Argus camera is a joint development venture product of Truly Seminconductors Ltd. and Baidu. The Argus camera features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on ON Semiconductor MARS. +The Argus camera is a joint development venture product of Truly Semiconductors Ltd. and Baidu. It features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on ON Semiconductor MARS. -We recommend using ```three cameras```, one with **6 mm** lens, one with **12 mm** lens and the last one with **2.33 mm** to achieve the required performance for the traffic light detection application. +We recommend using ```three cameras```: one with a **6 mm** lens, one with a **12 mm** lens and one with a **2.33 mm** lens, to achieve the required performance for the traffic light detection application. 
![camera_image](images/Argus_pic.png) diff --git a/docs/specs/Camera/Wissen_Camera_Installation_Guide.md b/docs/specs/Camera/Wissen_Camera_Installation_Guide.md index 3dee6a60c4153177a88439f9a2763d1e3c889dce..c9e0772a01ab4c1936dbd16749fe57fb2ab96ef4 100644 --- a/docs/specs/Camera/Wissen_Camera_Installation_Guide.md +++ b/docs/specs/Camera/Wissen_Camera_Installation_Guide.md @@ -1,8 +1,8 @@ # Guide for Wissen Camera -Wissen's camera is a joint development venture product of Wissen Technologies and Baidu. This line of camera features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on AR230 sensor (1080P) and AP0202 ISP from ON Semiconductor. +Wissen's camera is a joint development venture product of Wissen Technologies and Baidu. This line of cameras features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on the AR230 sensor (1080P) and the AP0202 ISP from ON Semiconductor. -We recommend using ```three cameras```, two with **6 mm** lens and one with **25 mm** lens to achieve the required performance for the traffic light detection application. +We recommend using ```three cameras```: two with a **6 mm** lens and one with a **25 mm** lens, to achieve the required performance for the traffic light detection application. ![images](images/Wissen_pic.png) diff --git a/docs/specs/Dreamland_introduction.md b/docs/specs/Dreamland_introduction.md index 39fe96f4cb3fafd7c3d2109f092eb46bd7e3b3e3..0708ef9d1f3069a8d1c39158e5bf5741dc37e3fe 100644 --- a/docs/specs/Dreamland_introduction.md +++ b/docs/specs/Dreamland_introduction.md @@ -19,7 +19,7 @@ The simulation platform allows users to choose different road types, obstacles, The simulation platform gives users a complete setup to run multiple scenarios in parallel in the cloud and verify modules in the Apollo environment. 
3. **Automatic Grading System:** -The current Automatic Grading System tests via 12 metrics: +The current Automatic Grading System tests against 12 metrics: - Collision detection - Red-light violation detection - Speeding detection @@ -43,7 +43,7 @@ The current Automatic Grading System tests via 12 metrics: Through Dreamland, you could run millions of scenarios on the Apollo platform, but broadly speaking, there are two types of scenarios: 1. **Worldsim:** -Worldsim is synthetic data created manually with specific and well-defined obstacle behavior and traffic light status. They are simple yet effective for testing the autonomous car in a well-defined environment. They do however lack the complexity found in real-world traffic conditions. +Worldsim scenarios are synthetic data created manually with specific and well-defined obstacle behavior and traffic light status. They are simple yet effective for testing the autonomous car in a well-defined environment. They do, however, lack the complexity found in real-world traffic conditions. 2. **Logsim:** Logsim is extracted from real world data using our sensors. They are more realistic but also less deterministic. The obstacles perceived may be fuzzy and the traffic conditions are more complicated. @@ -51,7 +51,7 @@ Logsim is extracted from real world data using our sensors. They are more realis ## Key Features 1. **Web Based:** Dreamland does not require you to download large packages or heavy software; it is a web-based tool that can be accessed from any browser-friendly device -2. **Highly Customizable Scenarios:** With a comprehensive list of traffic elements you can fine tune Dreamland to suit your niche development. +2. **Highly Customizable Scenarios:** With a comprehensive list of traffic elements, you can fine-tune Dreamland to suit your niche development. 3. 
**Rigorous Grading Metrics:** The grading metrics include: - Collision detection - Checks whether there is a collision (any distance between objects less than 0.1m is considered a collision) @@ -77,8 +77,8 @@ Logsim is extracted from real world data using our sensors. They are more realis 3. Upon successful logging in, you will be redirected to the Dreamland Introduction page which includes a basic introduction and offerings ![](images/Dreamland_home.png) -Dreamland platform offers a number of features that you could explore to help you accelerate your autonomous driving testing and deployment. +The Dreamland platform offers a number of features that you can explore to help you accelerate your autonomous driving testing and deployment. -1. **User Manual** - This section includes documentation to help you get up and running with Dreamland. +1. **User Manual** - This section includes documentation to help you get up and running with Dreamland. - [Quickstart](https://azure.apollo.auto/user-manual/quick-start): This section will walk you through testing your build using our APIs and also how to manage and edit existing scenarios. - [Scenario Editor](): The scenario editor is a new feature to be launched in Apollo 5.0 which enables our developers to create their own scenarios to test niche aspects of their algorithm. In order to use this feature, you will have to complete the form on the screen as seen in the image below: @@ -103,7 +103,7 @@ Dreamland platform offers a number of features that you could explore to help yo 4. **Task Management:** Like Scenario Editor, Task Management is also a service offering currently in beta testing and open only to select partners. In order to use this feature, you will have to complete the form on the screen and request activation. The Task Management tab is extremely useful when testing any one particular type of scenario like side pass or U-turns. It helps test your algorithms against very specific test cases. 
-Within the Task Management page, you can run a `New Task` to test your personal Apollo github repository against a list of scenarios. You will receive a summary of the task which highlights if the build passed or not, along with the passing rate of both worldsim and logsim scenarios and finally the total miles tested virtually. You can also view the number of failed scenarios along with a description detailing the failed timestamp and the grading metric which failed. Finally, you can run the comparison tool to check how your build performed versus previous builds. +Within the Task Management page, you can run a `New Task` to test your personal Apollo GitHub repository against a list of scenarios. You will receive a summary of the task which highlights whether the build passed, along with the passing rate of both worldsim and logsim scenarios and, finally, the total miles tested virtually. You can also view the number of failed scenarios along with a description detailing the failed timestamp and the grading metric which failed. Finally, you can run the comparison tool to check how your build performed versus previous builds. 5. **Daily Build:** The Daily Build shows how well the current Apollo official GitHub repository runs against all the scenarios. It is run once every morning Pacific Time. diff --git a/docs/specs/Guideline_sensor_Installation_apollo_2.5.md b/docs/specs/Guideline_sensor_Installation_apollo_2.5.md index 67d590237741fc6ac6d4a9b978105875563eab20..69b3f7d451fc1dd629c797bd74ce117486842a8b 100644 --- a/docs/specs/Guideline_sensor_Installation_apollo_2.5.md +++ b/docs/specs/Guideline_sensor_Installation_apollo_2.5.md @@ -14,7 +14,7 @@ Peripherals Unit: millimeter (mm) -Origin: The center of the rear wheel Axle +Origin: The center of the rear wheel axle @@ -37,7 +37,7 @@ One camera with 6mm-lens should face the front of ego-vehicle. The front-facing **Figure 3. 
Example setup of cameras** -After installation of cameras, The physical x, y, z location of camera w.r.t. origin should be saved in the calibration file. +After installation of cameras, the physical x, y, z location of the camera w.r.t. the origin should be saved in the calibration file. #### Verification of camera Setups The orientation of all three cameras should be all zeros. When the camera is installed, it is required to record a rosbag by driving a straight highway. By the replay of rosbag, the camera orientation should be re-adjusted to have pitch, yaw, and roll angles to be zero degree. When the camera is correctly installed, the horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. Please see the image below for the ideal camera setup. @@ -46,7 +46,7 @@ The orientation of all three cameras should be all zeros. When the camera is ins **Figure 4. An example of an image after camera installation. The horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. The red lines show the center of the width and the height of the image.** -The example of estimated translation parameters is shown below. +An example of the estimated translation parameters is shown below. ``` header: seq: 0 @@ -61,9 +61,9 @@ transform: y: -0.5 z: 0.5 w: -0.5 - translation: + translation: x: 1.895 y: -0.235 - z: 1.256 + z: 1.256 ``` If angles are not zero, they need to be calibrated and represented in quaternion (see above transform->rotation). 
diff --git a/docs/specs/Guideline_sensor_Installation_apollo_3.0.md b/docs/specs/Guideline_sensor_Installation_apollo_3.0.md index d5298d8df27dd8fbdce3f1c039d623c8ab0e2a75..6596eb601d0782c1aa06111ae579de8bad4e44b0 100644 --- a/docs/specs/Guideline_sensor_Installation_apollo_3.0.md +++ b/docs/specs/Guideline_sensor_Installation_apollo_3.0.md @@ -14,7 +14,7 @@ Peripherals Unit: millimeter (mm) -Origin: The center of the rear wheel Axle +Origin: The center of the rear wheel axle @@ -37,7 +37,7 @@ One camera with 6mm-lens should face the front of ego-vehicle. The front-facing **Figure 3. Example setup of cameras** -After installation of cameras, The physical x, y, z location of camera w.r.t. origin should be saved in the calibration file. +After installation of cameras, the physical x, y, z location of the camera w.r.t. the origin should be saved in the calibration file. #### Verification of camera Setups The orientation of all three cameras should be all zeros. When the camera is installed, it is required to record a rosbag by driving a straight highway. By the replay of rosbag, the camera orientation should be re-adjusted to have pitch, yaw, and roll angles to be zero degree. When the camera is correctly installed, the horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. Please see the image below for the ideal camera setup. @@ -46,7 +46,7 @@ The orientation of all three cameras should be all zeros. When the camera is ins **Figure 4. An example of an image after camera installation. The horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. The red lines show the center of the width and the height of the image.** -The example of estimated translation parameters is shown below. +An example of the estimated translation parameters is shown below. 
``` header: seq: 0 @@ -61,9 +61,9 @@ transform: y: -0.5 z: 0.5 w: -0.5 - translation: + translation: x: 1.895 y: -0.235 - z: 1.256 + z: 1.256 ``` If the angles are not zero, they need to be calibrated and represented as a quaternion (see transform->rotation above). diff --git a/docs/specs/Lidar/Hesai_Pandora_Installation_Guide.md b/docs/specs/Lidar/Hesai_Pandora_Installation_Guide.md index 4de06d99a0642d9ef06a380bd73b1944f131ce9b..ff1d44867612ad9cbacd864be31ae7f9e58d5fcf 100644 --- a/docs/specs/Lidar/Hesai_Pandora_Installation_Guide.md +++ b/docs/specs/Lidar/Hesai_Pandora_Installation_Guide.md @@ -6,7 +6,7 @@ Pandora is an all-in-one sensor kit for environmental sensing for self-driving c #### Mounting -A customized mounting structure is required to successfully mount a Pandora kit on top of a vehicle. This structure must provide rigid support to the LiDAR system while raising the LiDAR to a certain height above the ground under driving conditions. This height should prevent the laser beams from the LiDAR being blocked by the front and/or rear of the vehicle. The actual height needed for the LiDAR depends on the design of the vehicle and the mounting point of the LiDAR relative to the vehicle. While planning the mounting height and angle, please read through the manual for additional details. +A customized mounting structure is required to successfully mount a Pandora kit on top of a vehicle. This structure must provide rigid support to the LiDAR system while raising the LiDAR to a certain height above the ground under driving conditions. This height should prevent the laser beams of the LiDAR from being blocked by the front and/or rear of the vehicle. The actual height needed for the LiDAR depends on the design of the vehicle and the mounting point of the LiDAR relative to the vehicle. While planning the mounting height and angle, please read through the manual for additional details.
``` If, for some reason, the LiDAR beam has to be blocked by the vehicle, it might be necessary to apply a filter to remove these points while processing the data received. @@ -14,7 +14,7 @@ If for some reason, the LiDAR beam has to be blocked by the vehicle, it might be #### Wiring -Each Pandora includes a cable connection box and a corresponding cable bundle to connect to the power supply, the computer (ethernet) and the GPS timesync source. +Each Pandora includes a cable connection box and a corresponding cable bundle to connect to the power supply, the computer (Ethernet) and the GPS timesync source. ![LiDAR_Cable](images/pandora_cable.png) @@ -28,10 +28,10 @@ Each Pandora includes a cable connection box and a corresponding cable bundle to 3. **Connection to the GPS** - The Pandora kit requires the recommended minimum specific GPS/Transit data (GPRMC) and pulse per second (PPS) signal to synchronize to GPS time. A customized connection is needed to establish the communication between the GPS receiver and the LiDAR. Please read your GPS manual for information on how to collect the output of those signals. + The Pandora kit requires the recommended minimum specific GPS/Transit data (GPRMC) and pulse per second (PPS) signal to synchronize to GPS time. A customized connection is needed to establish the communication between the GPS receiver and the LiDAR. Please read your GPS manual for information on how to collect the output of those signals. + + On the interface box, a GPS port (SM06B-SRSS-TB) is provided to send the GPS signals as an input to the LiDAR. The detailed pinout is shown in the image below. - On the interface box, a GPS port (SM06B-SRSS-TB) is provided to send the GPS signals as an input to the LiDAR. The detailed pinout is shown in the image below.
- | Pin # | Input/output | Comment | | ----- | ------------ | ----------------------------------------------- | | 1 | Input | PPS signal (3.3V) | diff --git a/docs/specs/Open_Space_Planner.md b/docs/specs/Open_Space_Planner.md index adeccb871952d314046c207af6af79d2ccb21aba..a6ae7b2e69c0ffd22993bb2ccc69ab581c7fb634 100644 --- a/docs/specs/Open_Space_Planner.md +++ b/docs/specs/Open_Space_Planner.md @@ -40,7 +40,7 @@ Once this stage is complete, the output is directly sent to the Control module t ![](images/os_step3.png) -## Use Cases +## Use Cases Currently Open Space Planner is used for 2 parking scenarios in the planning stage namely: diff --git a/docs/specs/Radar/Continental_ARS408-21_Radar_Installation_Guide.md b/docs/specs/Radar/Continental_ARS408-21_Radar_Installation_Guide.md index d81ed02bc68dcd07a82c002b70abd56ad2d0b647..d7b0e92939fdd308e3e3a73336cff50e9d67b05c 100644 --- a/docs/specs/Radar/Continental_ARS408-21_Radar_Installation_Guide.md +++ b/docs/specs/Radar/Continental_ARS408-21_Radar_Installation_Guide.md @@ -2,9 +2,9 @@ ``` The ARS408 realized a broad field of view by two independent scans in conjunction with the high range functions -like Adaptive Cruise Control, Forward Collision Warning and Emergency Brake Assist can be easily implemented. -Its capability to detect stationary objects without the help of a camera system emphasizes its performance. The ARS408 is a best in class radar, -especially for the stationary target detection and separation. +like Adaptive Cruise Control, Forward Collision Warning and Emergency Brake Assist can be easily implemented. +Its capability to detect stationary objects without the help of a camera system emphasizes its performance. The ARS408 is a best in class radar, +especially for the stationary target detection and separation. 
----Continental official website ``` @@ -39,6 +39,5 @@ The following diagram contains the range of the ARS-408-21 Radar: This device is `Apollo Platform Supported` - - \ No newline at end of file + diff --git a/docs/specs/Radar/Racobit_B01HC_Radar_Installation_Guide.md b/docs/specs/Radar/Racobit_B01HC_Radar_Installation_Guide.md index cdcd0ca3d7edf01f3a7dcba034ef376b0ac6f8e4..a28bb6c6671a34f5c5c8e246082cdbacde949201 100644 --- a/docs/specs/Radar/Racobit_B01HC_Radar_Installation_Guide.md +++ b/docs/specs/Radar/Racobit_B01HC_Radar_Installation_Guide.md @@ -1,6 +1,6 @@ ## Installation Guide of Racobit B01HC Radar -Racobit developed one Radar product with **60 degree FOV** and **150 m** detection range for autonomous driving needs. +Racobit developed a Radar product with a **60 degree FOV** and a **150 m** detection range for autonomous driving needs. ![radar_image](images/b01hc.png) @@ -11,12 +11,11 @@ Racobit developed one Radar product with **60 degree FOV** and **150 m** detecti 3. Connect the power cable to a **12VDC** power supply. 4. Connect the CAN output to the CAN interface of the IPC. 5. You should be able to receive the CAN messages through the CAN port once the Radar is powered. -6. Please discuss with the vendor for additional support if needed while integrating it with your vehicle. +6. Please contact the vendor for additional support if needed while integrating it with your vehicle.
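Step 5 above says raw CAN messages should arrive once the radar is powered. As a hedged sketch of what receiving code might do with one of those frames — this assumes Linux SocketCAN's generic `struct can_frame` layout, not any Racobit-specific message format:

```python
import struct

# Linux SocketCAN struct can_frame: 32-bit CAN ID, 1-byte DLC (payload
# length), 3 padding bytes, then up to 8 data bytes -- 16 bytes total.
CAN_FRAME_FMT = "<IB3x8s"

def parse_can_frame(raw):
    """Unpack a 16-byte SocketCAN frame into (can_id, payload bytes)."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    # Mask off the EFF/RTR/ERR flag bits kept in the top of can_id.
    return can_id & 0x1FFFFFFF, data[:dlc]

# Example: a hypothetical frame with ID 0x300 carrying 4 payload bytes.
raw = struct.pack(CAN_FRAME_FMT, 0x300, 4, bytes([0xDE, 0xAD, 0xBE, 0xEF]))
can_id, payload = parse_can_frame(raw)
print(hex(can_id))  # 0x300
```

On Linux, frames in this layout can be read from a raw CAN socket (`socket.AF_CAN` with `socket.CAN_RAW`); decoding the payload into radar targets requires the vendor's message specification.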
## Disclaimer This device is `Apollo Hardware Development Platform Supported` - - \ No newline at end of file + diff --git a/docs/specs/calibration_table/control_calibration.md b/docs/specs/calibration_table/control_calibration.md index 60f0b6a54da342f71df6382c5961e6c1c306eb81..74f56d55d15021458ee694e25a9fe8f35569ca00 100644 --- a/docs/specs/calibration_table/control_calibration.md +++ b/docs/specs/calibration_table/control_calibration.md @@ -90,7 +90,7 @@ Before uploading your data, take a note of: ``` Origin Folder -> Task Folder -> Vehicle Folder -> Records + Configuration files ``` -1. A **task** folder needs to be created for your calibration job, such as task001, task002... +1. A **task** folder needs to be created for your calibration job, such as task001, task002... 1. A vehicle folder needs to be created for your vehicle. The name of the folder should be the same as seen in Dreamview 1. Inside your folder, create a **Records** folder to hold the data 1. Store all the **Configuration files** along with the Records folder, within the **Vehicle** folder diff --git a/docs/specs/dreamview_usage_table.md b/docs/specs/dreamview_usage_table.md index b3c757d6c42931fff8bc5ab562939f6ae27f3119..5c93dd84c020eb66ab69e2a4a9fa4c45738ef468 100644 --- a/docs/specs/dreamview_usage_table.md +++ b/docs/specs/dreamview_usage_table.md @@ -1,7 +1,7 @@ # Dreamview Usage Table -Dreamview is a web application that, -1. visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. +Dreamview is a web application that: +1. visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. 2. provides a human-machine interface for users to view hardware status, turn modules on/off, and start the autonomous driving car. 3. provides debugging tools, such as PnC Monitor, to efficiently track module issues.
@@ -11,16 +11,16 @@ The application layout is divided into several regions: header, sidebar, main vi ### Header The Header has 3 drop-downs that can be set as shown: -![](images/dreamview_usage_table/header.png) +![](images/dreamview_usage_table/header.png) The Co-Driver switch is used to detect disengagement events automatically. Once detected, Dreamview will display a pop-up of the data recorder window for the co-driver to enter a new drive event. Depending on the mode chosen from the mode selector, the corresponding modules and commands, defined in [hmi.conf](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/conf/hmi.conf), will be presented in the **Module Controller**, and **Quick Start**, respectively. -Note: navigation mode is for the purpose of the low-cost feature introduced in Apollo 2.5. Under this mode, Baidu (or Google) Map presents the absolute position of the ego-vehicle, while the main view has all objects and map elements presented in relative positions to the ego-vehicle. +Note: navigation mode supports the low-cost feature introduced in Apollo 2.5. Under this mode, Baidu (or Google) Map presents the absolute position of the ego-vehicle, while the main view presents all objects and map elements in positions relative to the ego-vehicle. ### Sidebar and Tool View -![](images/dreamview_usage_table/sidebar.png) +![](images/dreamview_usage_table/sidebar.png) The sidebar panel controls what is displayed in the tool view, described below: ### Tasks @@ -38,12 +38,12 @@ All the tasks that you could perform in DreamView: * **Console**: monitor messages from the Apollo platform ### Module Controller A panel to view the hardware status and turn the modules on/off -![](images/dreamview_usage_table/module_controller.png) +![](images/dreamview_usage_table/module_controller.png) ### Layer Menu A toggle menu that controls which visual elements are displayed.
-![](images/dreamview_usage_table/layer_menu.png) +![](images/dreamview_usage_table/layer_menu.png) ### Route Editing A visual tool to plan a route before sending the routing request to the Routing module @@ -51,23 +51,23 @@ A visual tool to plan a route before sending the routing request to the Routing ### Data Recorder A panel to report issues to the drive event topic ("/apollo/drive_event") in rosbag. -![](images/dreamview_usage_table/data_recorder.png) +![](images/dreamview_usage_table/data_recorder.png) ### Default Routing -List of predefined routes or single points, known as point of interest (POI). +List of predefined routes or single points, known as points of interest (POI). -![](images/dreamview_usage_table/default_routing.png) +![](images/dreamview_usage_table/default_routing.png) -If route editing is on, routing point(s) can be added visually on the map. +If route editing is on, routing point(s) can be added visually on the map. If route editing is off, clicking a desired POI will send a routing request to the server. If the selected POI contains only a point, the start point of the routing request is the current position of the autonomous car; otherwise, the start position is the first point from the desired route. To edit POIs, see the [default_end_way_point.txt](https://github.com/ApolloAuto/apollo/blob/master/modules/map/data/demo/default_end_way_point.txt) file under the map's directory. For example, if the map selected from the map selector is "Demo", then [default_end_way_point.txt](https://github.com/ApolloAuto/apollo/blob/master/modules/map/data/demo/default_end_way_point.txt) is located under `modules/map/data/demo`. -### Main view: +### Main view: The main view renders animated 3D computer graphics in a web browser.
-![](images/dreamview_usage_table/mainview.png) +![](images/dreamview_usage_table/mainview.png) Elements in the main view are listed in the table below: @@ -82,7 +82,7 @@ Elements in the main view are listed in the table below: | ![](images/dreamview_usage_table/0clip_image038.png) |
  • Nudge object decision -- the orange zone indicates the area to avoid
| | ![](images/dreamview_usage_table/0clip_image062.png) |
  • The green thick curvy band indicates the planned trajectory
| - + #### Obstacles | Visual Element | Depiction Explanation | @@ -124,8 +124,8 @@ When a yield decision is made based on the "Right of Way" laws at a stop-sign in ##### Stop reasons -When a STOP decision fence is shown, the reason to stop is displayed on the right side of the stop icon. Possible reasons and the corresponding icons are: - +When a STOP decision fence is shown, the reason to stop is displayed on the right side of the stop icon. Possible reasons and the corresponding icons are: + | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image040.png) |
  • **Clear-zone in front**
| @@ -181,13 +181,13 @@ The Planning/Control tab from the monitor plots various graphs to reflect the in #### Customizable Graphs for Planning Module [planning_internal.proto](https://github.com/ApolloAuto/apollo2/blob/master/modules/planning/proto/planning_internal.proto#L180) is a protobuf that stores debugging information, which is processed by the dreamview server and sent to the dreamview client to help engineers debug. For users who want to plot their own graphs for new planning algorithms: 1. Fill in the information of your "chart" defined in planning_internal.proto. -2. X/Y axis: [**chart.proto** ](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto) has "Options" that you could set for axis which include +2. X/Y axis: [**chart.proto**](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto) has "Options" that you could set for each axis, which include +   * min/max: minimum/maximum number for the scale * label_string: axis label * legend_display: to show or hide a chart legend. -3. Dataset: - * Type: each graph can have multiple lines, polygons, and/or car markers defined in [**chart.proto**](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto): +3. Dataset: + * Type: each graph can have multiple lines, polygons, and/or car markers defined in [**chart.proto**](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto): * Line: @@ -197,11 +197,11 @@ The Planning/Control tab from the monitor plots various graphs to reflect the in * Car: - + * Label: each dataset must have a unique "Label" within each chart in order to help dreamview identify which dataset to update. * Properties: for polygon and line, you can set styles. Dreamview uses **Chartjs.org** for graphs.
Below are common ones: - | Name | Description | Example | + | Name | Description | Example | | ----------- | --------------------------------------- | ----------------------- | | color | The line color | rgba(27, 249, 105, 0.5) | | borderWidth | The line width | 2 | diff --git a/docs/specs/dynamic_model.md b/docs/specs/dynamic_model.md index 72974daa2c46439f02811fee2d77969e253f6f59..a7401f4253fb4788a8fa1e80d85c4414e3dc3d80 100644 --- a/docs/specs/dynamic_model.md +++ b/docs/specs/dynamic_model.md @@ -7,7 +7,7 @@ Simulation is a vital part of autonomous driving especially in Apollo where most The architecture diagram for how Dynamic model works is included below: ![](images/architecture.png) -The Control module recieves input via planning and the vehicle and uses it effectively to generate the output path which is then fed into the Dynamic model. +The Control module receives input from planning and the vehicle and uses it to generate the output path, which is then fed into the Dynamic model. ## Examples @@ -19,14 +19,14 @@ The green lines in the graph below are the actual planning trajectories for thos ``` 1. **Longitudinal Control** -A pedestrian walk across the road and the ego car needs to stop by applying the brake +A pedestrian walks across the road and the ego car needs to stop by applying the brake ![](images/Longitudinal.png) 2. **Lateral Control** The ego car has to make a wide-angle U-turn in this scenario. As seen in the image below, the steering turn is at 64%. You can also monitor the performance of the dynamic model on the right against the actual planned trajectory. -3. **Backward Behavior** +3. **Backward Behavior** The ego car has to park itself in a designated spot. This scenario is complex as it requires a mixture of forward and backward (reverse) driving and requires a high level of accuracy from the control module. As you can see in the image below, the steering turn required is at `-92%`.
Additional details on this example can be seen in the planning module's Park scenario. ![](images/Backward.png) diff --git a/docs/specs/perception_apollo_3.0.md b/docs/specs/perception_apollo_3.0.md index 17e40ed79348192e4820aaf618878c0d83b0d289..e41eea24204d3d55d680e8e7668effee73aaa9b9 100644 --- a/docs/specs/perception_apollo_3.0.md +++ b/docs/specs/perception_apollo_3.0.md @@ -10,7 +10,7 @@ Apollo 3.0 introduced a production level solution for the low-cost, closed venue * **Asynchronous sensor fusion**: unlike the previous version, Perception in Apollo 3.0 is capable of consolidating all the information and data points by asynchronously fusing LiDAR, Radar and Camera data. Such conditions allow for more comprehensive data capture and reflect more practical sensor environments. * **Online pose estimation**: This new feature estimates the pose of an ego-vehicle for every single frame. This feature helps to drive through bumps or slopes on the road with more accurate 3D scene understanding. * **Ultrasonic sensors**: Perception in Apollo 3.0 now works with ultrasonic sensors. The output can be used for Automated Emergency Brake (AEB) and vertical/perpendicular parking. - * **Whole lane line**: Unlike previous lane line segments, this whole lane line feature will provide more accurate and long range detection of lane lines. + * **Whole lane line**: Unlike previous lane line segments, this whole lane line feature will provide more accurate and long-range detection of lane lines. * **Visual localization**: Cameras are currently being tested to aid and enhance localization * **16 beam LiDAR support** @@ -62,9 +62,9 @@ The lane can be represented by multiple sets of polylines such as next left lane A CIPV is the closest vehicle in the ego-lane. An object is represented by a 3D bounding box and its 2D projection from the top-down view localizes the object on the ground. Then, each object is checked to determine whether it is in the ego-lane or not.
Among the objects in the ego-lane, the closest one will be selected as the CIPV. ### Tailgating -Tailgating is a maneuver to follow the vehicle or object in front of the autonomous car. From the tracked objects and ego-vehicle motion, the trajectories of objects are estimated. This trajectory will guide how the objects are moving as a group on the road and the future trajectory can be predicted. There is two kinds of tailgating, the one is pure tailgating by following the specific car and the other is CIPV-guided tailgating, which the ego-vehicle follows the CIPV's trajectory when the no lane line is detected. +Tailgating is a maneuver to follow the vehicle or object in front of the autonomous car. From the tracked objects and the ego-vehicle's motion, the trajectories of the objects are estimated. These trajectories indicate how the objects are moving as a group on the road, so their future trajectories can be predicted. There are two kinds of tailgating: pure tailgating, which follows a specific car, and CIPV-guided tailgating, in which the ego-vehicle follows the CIPV's trajectory when no lane line is detected. -The snapshot of visualization of the output is shown in the figure below: +A snapshot of the visualized output is shown in the figure below: ![Image](images/perception_visualization_apollo_3.0.png) The figure above depicts visualization of the Perception output in Apollo 3.0. The top left image shows image-based output. The bottom-left image shows the 3D bounding box of objects. Therefore, the left image shows a 3-D top-down view of lane lines and objects. The CIPV is marked with a red bounding box.
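The CIPV selection described above — project each tracked object to the ground, keep those inside the ego-lane, and take the closest — can be sketched as follows; the object fields and the lane-membership test here are illustrative assumptions, not the actual perception message schema:

```python
def select_cipv(objects, in_ego_lane):
    """Pick the closest in-ego-lane object as the CIPV.

    objects: list of dicts with a longitudinal 'distance' from the ego
    vehicle (illustrative schema, not the real perception protobuf).
    in_ego_lane: predicate deciding lane membership from the object's
    ground projection.
    """
    candidates = [obj for obj in objects if in_ego_lane(obj)]
    return min(candidates, key=lambda obj: obj["distance"], default=None)

tracked = [
    {"id": 1, "distance": 42.0, "lane_offset": 0.3},   # in ego lane, far
    {"id": 2, "distance": 15.0, "lane_offset": 3.5},   # adjacent lane
    {"id": 3, "distance": 27.5, "lane_offset": -0.1},  # in ego lane, near
]
# Treat anything within ~1.5 m of the lane center as "in lane" (assumed width).
cipv = select_cipv(tracked, lambda o: abs(o["lane_offset"]) < 1.5)
print(cipv["id"])  # 3
```

Note that the nearest object overall (id 2) is not the CIPV, because it sits outside the ego-lane; lane membership is checked before distance.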
The yellow lines depict the trajectory of each vehicle diff --git a/docs/specs/perception_apollo_5.0.md b/docs/specs/perception_apollo_5.0.md index 3a05ad8617da590dbb107f798fc2bb063f6eb868..6ba6404fe7feec125706999a79a59d302dddb161 100644 --- a/docs/specs/perception_apollo_5.0.md +++ b/docs/specs/perception_apollo_5.0.md @@ -25,7 +25,7 @@ To learn more about individual sub-modules, please visit [Perception - Apollo 3. ### Supports PaddlePaddle -The Apollo platform's perception module actively depended on Caffe for its modelling, but will now support PaddlePaddle, an open source platform developed by Baidu to support its various deep learning projects. +The Apollo platform's perception module previously depended on Caffe for its modelling, but now also supports PaddlePaddle, an open-source platform developed by Baidu to support its various deep learning projects. Some features include: - **PCNNSeg**: Object detection from 128-channel lidar or a fusion of three 16-channel lidars using PaddlePaddle - **PCameraDetector**: Object detection from a camera diff --git a/docs/technical_tutorial/apollo_2.5_technical_tutorial.md b/docs/technical_tutorial/apollo_2.5_technical_tutorial.md index 938a6709d053a517bed2ec24ebfc1bbf19f69db4..35dd5bf5f261ddfa15fbc407f26f7508a348e0b1 100644 --- a/docs/technical_tutorial/apollo_2.5_technical_tutorial.md +++ b/docs/technical_tutorial/apollo_2.5_technical_tutorial.md @@ -72,4 +72,4 @@ * [How to use Apollo 2.5 navigation mode](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md "How to use Apollo 2.5 navigation mode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/dreamview_usage_table.md "Introduction of Dreamview") - +