Commit d10330a2 authored by Xiangquan Xiao

Robot: Remove trailing spaces.

Parent b81520dd
......@@ -39,9 +39,9 @@ Yes, currently all comments need to be made in Doxygen.
---
## If I cannot solve my build problems, what is the most effective way to ask for help?
Many build problems are related to the environment settings.
1. Run the script to get your environment: `bash scripts/env.sh >& env.txt`
2. Post the content of env.txt to our Github issues page and someone from our team will get in touch with you.
......@@ -52,10 +52,10 @@ Use these ports for HMI and Dreamview:
---
## Why is there no ROS environment in the dev docker?
The ROS package is downloaded when you start to build apollo:
`bash apollo.sh build`.
1. Run the following command inside Docker to set up the ROS environment after the build is complete:
`source /apollo/scripts/apollo_base.sh`
2. Run ROS-related commands such as rosbag, rostopic and so on.
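For example, a quick sanity check after sourcing the environment might look like this (the topics you see depend on which modules are running):
```bash
source /apollo/scripts/apollo_base.sh
rostopic list                    # list the topics currently being published
rosbag record -O sample.bag -a   # record all topics into sample.bag (Ctrl-C to stop)
```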
......
......@@ -12,7 +12,7 @@ The system requirement for building Apollo is Ubuntu 14.04. Using a Docker conta
To install docker, you may refer to
[Official guide to install the Docker-ce](https://docs.docker.com/install/linux/docker-ce/ubuntu).
Don't forget to test it using
[post-installation steps for Linux](https://docs.docker.com/install/linux/linux-postinstall).
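A minimal smoke test along the lines of that guide (the `docker` group may already exist on your system):
```bash
sudo groupadd docker             # create the docker group if it is missing
sudo usermod -aG docker $USER    # let your user run docker without sudo
newgrp docker                    # pick up the new group, or log out and back in
docker run hello-world           # prints a greeting if the daemon is reachable
```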
## Build Apollo
......@@ -55,16 +55,16 @@ sudo dpkg -i <file>.deb
sudo apt-get install -f # Install dependencies
```
### Start VSCode
Start VSCode with the following command:
```bash
code
```
### Open the Apollo project in VSCode
Use the keyboard shortcut **(Ctrl+K Ctrl+O)** to open the Apollo project.
### Build the Apollo project in VSCode
Use the keyboard shortcut **(Ctrl+Shift+B)** to build the Apollo project.
### Run all unit tests for the Apollo project in VSCode
Select the "Tasks->Run Tasks..." menu command and click "run all unit tests for the apollo project" from a popup menu to check the code style for the Apollo project.
Select the "Tasks->Run Tasks..." menu command and click "run all unit tests for the apollo project" from a popup menu to check the code style for the Apollo project.
If you are currently developing on 16.04, you will get a build error.
As seen in the image below, 2 perception tests fail. To avoid this build error, refer to [how to build Apollo using Ubuntu 16](how_to_run_apollo_2.5_with_ubuntu16.md).
......@@ -72,9 +72,9 @@ As seen in the image below, 2 perception tests. To avoid this build error, refer
![Build error](images/build_fail.png)
### Run a code style check task for the Apollo project in VSCode
Select the "Tasks->Run Tasks..." menu command and click "code style check for the apollo project" from a popup menu to check the code style for the Apollo project.
Select the "Tasks->Run Tasks..." menu command and click "code style check for the apollo project" from a popup menu to check the code style for the Apollo project.
### Clean the Apollo project in VSCode
Select the "Tasks->Run Tasks..." menu command and click "clean the apollo project" from a popup menu to clean the Apollo project.
Select the "Tasks->Run Tasks..." menu command and click "clean the apollo project" from a popup menu to clean the Apollo project.
### Change the building option
You can change the "build" option to another one such as "build_gpu" (refer to the "apollo.sh" file for details) in ".vscode/tasks.json"
......
......@@ -100,7 +100,7 @@ Program terminated with signal SIGILL, Illegal instruction.
#9 0x0000000000000000 in ?? ()
(gdb) q
@in_dev_docker:/apollo$ addr2line -C -f -e /usr/local/lib/libpcl_sample_consensus.so.1.7.2 0x375bec
double boost::math::detail::erf_inv_imp<double, boost::math::policies::policy<boost::math::policies::promote_float<false>, boost::math::policies::promote_double<false>, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy> >(double const&, double const&, boost::math::policies::policy<boost::math::policies::promote_float<false>, boost::math::policies::promote_double<false>, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy, boost::math::policies::default_policy> const&, mpl_::int_<64> const*)
??:?
```
......@@ -118,7 +118,7 @@ apolloauto/apollo map_volume-sunnyvale_big_loop-latest 80aca30fa08a 3
apolloauto/apollo localization_volume-x86_64-latest be947abaa650 2 months ago 5.74MB
apolloauto/apollo map_volume-sunnyvale_loop-latest 36dc0d1c2551 2 months ago 906MB
build cmd:
in_dev_docker:/apollo$ ./apollo.sh build_no_perception dbg
```
2. Compile pcl and copy the pcl library files to `/usr/local/lib`:
......@@ -130,7 +130,7 @@ See [/apollo/WORKSPACE.in](https://github.com/ApolloAuto/apollo/blob/master/WORK
Inside docker:
```
(to keep pcl in host, we save pcl under /apollo)
cd /apollo
git clone https://github.com/PointCloudLibrary/pcl.git
git checkout -b <your pcl-lib version> pcl-<your pcl-lib version>
......@@ -145,9 +145,9 @@ index f0a5600..42c182e 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -7,6 +7,15 @@ endif()
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "possible configurations" FORCE)
+if (CMAKE_VERSION VERSION_LESS "3.1")
+# if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
......@@ -193,7 +193,7 @@ If CPU does not support AVX instructions, and you gdb the coredump file under /a
```
Program terminated with signal SIGILL, Illegal instruction.
#0 0x000000000112b70a in std::_Hashtable<std::string, std::string, std::allocator<std::string>, std::__detail::_Identity, std::equal_to<std::string>, google::protobuf::hash<std::string>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_Hashtable (this=0x3640288, __bucket_hint=10,
__h1=..., __h2=..., __h=..., __eq=..., __exk=..., __a=...)
---Type <return> to continue, or q <return> to quit---
at /usr/include/c++/4.8/bits/hashtable.h:828
......
......@@ -8,7 +8,7 @@ In order to run your data on this tool, please follow the steps below:
1. Build Apollo as recommended in the [Build Guide](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md) until the `./apollo.sh build` step.
2. Once inside the dev docker and after running `./apollo.sh build`, please go to the folder `modules/tools/map_datachecker/`
3. Starting the server:
```bash
bash server.sh start
```
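Since the default value of `cmd` is `start` (see the Tips section of this guide), this is equivalent to:
```bash
bash server.sh
```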
......@@ -54,5 +54,5 @@ In order to run your data on this tool, please follow the steps below:
## Tips
1. The default value of `cmd` is `start`
2. All error messages will be printed to help better prepare your map data. Please follow the error messages exactly as recommended
......@@ -45,7 +45,7 @@ Before running the visualizer, you may setup the data directories and the algori
```
/apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool
```
Now you will see a pop-up window showing the perception result, with the point cloud displayed frame by frame. The obstacles are shown with purple rectangular bounding boxes. There are three modes to visualize the point cloud with/without the ROI area:
* Showing all the point cloud with grey color;
* Showing the point cloud of the ROI area only with green color;
* Showing the point cloud of ROI area with green color and that of other areas with grey color. You may press the `S` key on keyboard to switch the modes in turn.
......@@ -56,7 +56,7 @@ cd install
sudo ./install_kernel.sh
```
3. Reboot your system by the `reboot` command
4. Build the ESD CAN driver source code
Now you need to build the ESD CAN driver source code according to [ESDCAN-README.md](https://github.com/ApolloAuto/apollo-kernel/blob/master/linux/ESDCAN-README.md)
## Build your own kernel.
......
......@@ -13,7 +13,7 @@ This quick start focuses on Apollo 1.5 new features. For general Apollo concepts
Use your favorite browser on the host machine to access the HMI web service at http://localhost:8887
4. Select Vehicle and Map
You'll be required to set up a profile before doing anything else. Click the dropdown menu to select your HDMap and vehicle in use. The lists are defined in [HMI config file](https://raw.githubusercontent.com/ApolloAuto/apollo/master/modules/hmi/conf/config.pb.txt).
*Note: It's also possible to change profile on the right panel of HMI, but just remember to click "Reset All" on the top-right corner to restart the system.*
......
......@@ -72,4 +72,4 @@
* [How to use Apollo 2.5 navigation mode](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md "How to use Apollo 2.5 navigation mode")
* [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/dreamview_usage_table.md "Introduction of Dreamview")
......@@ -2,9 +2,9 @@
Apollo Sensor Unit (ASU) is designed to work with Industrial PC (IPC) to implement sensor fusion, vehicle control and network access in Apollo's autonomous driving platform.
The ASU system provides sensor interfaces to collect data from various sensors, including cameras, Lidars, Radars, and Ultrasonic Sensors. The system also utilizes pulse per second (PPS) and GPRMC signals from GNSS receiver to implement data collection synchronization for the camera and LiDAR sensors.
The communication between the ASU and the IPC is through the PCI Express interface. The ASU collects sensor data and passes it to the IPC via PCI Express, and the IPC uses the ASU to send out Vehicle Control commands in the Controller Area Network (CAN) protocol.
In addition, Lidar connectivity via Ethernet, WWAN gateway via 4G LTE module, and WiFi access point via WiFi module will be enabled in the future releases.
......@@ -14,19 +14,19 @@ In addition, Lidar connectivity via Ethernet, WWAN gateway via 4G LTE module, an
#### Front Panel Connectors
1. External GPS PPS / GPRMC Input Port
2. FAKRA Camera Data Input Port (5 ports)
3. 100 Base-TX/1000 Base-T Ethernet Port (2 Ports)
4. KL-15 (AKA Car Ignition) Signal Input Port
#### Rear Panel Connectors
1. General purpose UART port (reserved)
2. External PCI Express Port (supports X4 or X8). For connections to the IPC, please use the EXTN port.
3. GPS PPS/GPRMC Output Rectangular Port (3 Ports) for LiDAR
4. Power and PPS/GPRMC Cylindrical Output Port for Stereo Camera/LiDAR
5. CAN Bus (4 Ports)
6. Main Power Input Connector
### Purchase Channels
......@@ -36,21 +36,21 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
1. Power Cable
The main power is from the vehicle battery, 9V ~ 36V, 120W.
![conn-DTF13-2P](images/conn-DTF13-2P.jpeg)
|MFR|MPN|Description|
|---------------|--------|-----------|
|TE Connectivity|DTF13-2P|DT RECP ASM|
| PIN # | NAME | I/O | Description |
| ----- | ---- | ---- | ------------------ |
| 1 | 12V | PWR | 12V (9V~36V, 120W) |
| 2 | GND | PWR | GROUND |
2. FPD-Link III cameras.
There are 5 FAKRA connectors for FPD-Link III cameras on the ASU front panel, labeled 1~5 from right to left. The ASU can support up to 5 cameras by enabling Cameras 1 ~ 5, whose deserializers (TI, DS90UB914ATRHSTQ1) convert FPD-Link III signals into parallel data signals.
......@@ -71,7 +71,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
| MFR | MPN | Description |
| :-------------- | --------- | ----------------------------------------- |
| TE Connectivity | 1565749-1 | Automotive Connectors 025 CAP ASSY, 4 Pin |
| PIN # | NAME | I/O | Description |
| ----- | ----- | ----- | ------------------------------------------------------------ |
......@@ -82,14 +82,14 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
4. GPS synchronization output channels
ASU forwards the duplicated GPS PPS/GPRMC from external GPS to the customized 8 Pin connector. This connector provides 3 sets of PPS/GPRMC output for sensors that need to be synchronized, such as LiDARs, etc.
![1376350-2](images/1376350-2.jpeg)
|MFR| MPN| Description|
| --------------- | --------- | ------------------------------------------------- |
| TE Connectivity | 1376350-2 | Automotive Connectors 025 I/O CAP HSG ASSY, 8 Pin |
| PIN # | NAME | I/O | Description |
| ----- | ------ | ------ | ------------------------------------------------------- |
......@@ -111,7 +111,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
| MFR | MPN | Description |
| --------------- | --------- | -------------------------------------------------- |
| TE Connectivity | 1318772-2 | Automotive Connectors 025 I/O CAP HSG ASSY, 12 Pin |
| PIN # | NAME | I/O | Description |
| ----- | ------ | ----- | --------------- |
......@@ -128,7 +128,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
| 11 | CANL-3 | INOUT | Channel 3, CANL |
| 12 | GND | PWR | Ground |
6. GPS PPS / GPRMC Output Rectangular Port
The connector provides 8 ports for 3 LiDARs.
......@@ -151,7 +151,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
| 6 | GND | PWR | Ground (ASU) -> Pin3 GND (LiDAR 1,3) |
| 7 | GND | PWR | Ground (ASU) -> Pin3 GND (LiDAR 2) |
| 8 | PPS | OUT | PPS (ASU) -> Pin1 GPS_PULSE_CNT (LiDAR 3)|
7. PPS/GPRMC Cylindrical Output Port for Stereo Camera/LiDAR
......@@ -164,7 +164,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
| --------------- | --------- | -------------------------------------------------- |
| Digi-Key | APC1735-ND | CONN RCPT FMALE 8POS SOLDER CUP |
| PIN # | NAME | I/O | Description |
| ----- | ------ | ----- | --------------- |
......@@ -174,7 +174,7 @@ The Apollo Sensor Unit is currently only provided to our Partners and certain de
## Disclaimer
This device is `Apollo Platform Supported`
\ No newline at end of file
# Guide for LI-USB30-AZ023WDRB
The cameras used are LI-USB30-AR023ZWDR with standard USB 3.0 case manufactured by Leopard Imaging Inc. This line of product is based on AZ023Z 1080P sensor and AP0202 ISP from ON Semiconductor. It supports external trigger and software trigger.
We recommend using two cameras with 6 mm lens and one with 25 mm lens to achieve the required performance for traffic light detection application.
![camera_image](images/LI-USB30-AZ023ZWDRB.png)
......@@ -12,7 +12,7 @@ This camera can be connected to the IPC through USB 3.0 cable for power and data
You can find additional information regarding the Leopard Imaging Inc. cameras on their [website](https://leopardimaging.com/product/li-usb30-ar023zwdrb/)
* [Data Sheet](https://www.leopardimaging.com/LI-USB30-AR023ZWDRB_datasheet.pdf)
* [Trigger Cable](https://leopardimaging.com/product/li-usb3-trig_cable/)
## Disclaimer
......
# Guide for Argus Camera
Argus camera is a joint development venture product of Truly Semiconductors Ltd. and Baidu. The Argus camera features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on ON Semiconductor MARS.
We recommend using ```three cameras```, one with a **6 mm** lens, one with a **12 mm** lens and the last one with a **2.33 mm** lens to achieve the required performance for the traffic light detection application.
![camera_image](images/Argus_pic.png)
......
# Guide for Wissen Camera
Wissen's camera is a joint development venture product of Wissen Technologies and Baidu. This line of camera features high dynamic range (HDR 120dB), internal/external trigger and OTA firmware update. It is well supported by the Apollo Sensor Unit. This line of product is based on AR230 sensor (1080P) and AP0202 ISP from ON Semiconductor.
We recommend using ```three cameras```, two with **6 mm** lens and one with **25 mm** lens to achieve the required performance for the traffic light detection application.
![images](images/Wissen_pic.png)
......
......@@ -19,7 +19,7 @@ The simulation platform allows users to choose different road types, obstacles,
The simulation platform gives users a complete setup to run multiple scenarios in parallel in the cloud and verify modules in the Apollo environment.
3. **Automatic Grading System:**
The current Automatic Grading System tests via 12 metrics:
- Collision detection
- Red-light violation detection
- Speeding detection
......@@ -43,7 +43,7 @@ The current Automatic Grading System tests via 12 metrics:
Through Dreamland, you could run millions of scenarios on the Apollo platform, but broadly speaking, there are two types of scenarios:
1. **Worldsim:**
Worldsim is synthetic data created manually with specific and well-defined obstacle behavior and traffic light status. They are simple yet effective for testing the autonomous car in a well-defined environment. They do however lack the complexity found in real-world traffic conditions.
2. **Logsim:**
Logsim is extracted from real world data using our sensors. They are more realistic but also less deterministic. The obstacles perceived may be fuzzy and the traffic conditions are more complicated.
......@@ -51,7 +51,7 @@ Logsim is extracted from real world data using our sensors. They are more realis
## Key Features
1. **Web Based:** Dreamland does not require you to download large packages or heavy software; it is a web-based tool that can be accessed from any browser-friendly device
2. **Highly Customizable Scenarios:** With a comprehensive list of traffic elements you can fine tune Dreamland to suit your niche development.
3. **Rigorous Grading Metrics:** The grading metrics include:
- Collision detection - Checks whether there is a collision (any distance between objects less than 0.1m is considered a collision)
......@@ -77,8 +77,8 @@ Logsim is extracted from real world data using our sensors. They are more realis
3. Upon successfully logging in, you will be redirected to the Dreamland Introduction page, which includes a basic introduction and offerings
![](images/Dreamland_home.png)
Dreamland platform offers a number of features that you could explore to help you accelerate your autonomous driving testing and deployment.
1. **User Manual** - This section includes documentation to help you get up and running with Dreamland.
- [Quickstart](https://azure.apollo.auto/user-manual/quick-start): This section will walk you through testing your build using our APIs and also how to manage and edit existing scenarios.
- [Scenario Editor](): The scenario editor is a new feature to be launched in Apollo 5.0 which enables our developers to create their own scenarios to test niche aspects of their algorithms. In order to use this feature, you will have to complete the form on the screen as seen in the image below:
......@@ -103,7 +103,7 @@ Dreamland platform offers a number of features that you could explore to help yo
4. **Task Management:** Like Scenario Editor, Task Management is also a service offering currently in beta testing and open only to select partners. In order to use this feature, you will have to complete the form on the screen and request activation.
The Task Management tab is extremely useful when testing any one particular type of scenario like side pass or U-turns. It helps test your algorithms against very specific test cases.
Within the Task Management page, you can run a `New Task` to test your personal Apollo github repository against a list of scenarios. You will receive a summary of the task which highlights if the build passed or not, along with the passing rate of both worldsim and logsim scenarios and finally the total miles tested virtually. You can also view the number of failed scenarios along with a description detailing the failed timestamp and the grading metric which failed. Finally, you can run the comparison tool to check how your build performed versus previous builds.
5. **Daily Build:** The Daily Build shows how well the current Apollo official Github repository runs against all the scenarios. It is run once every morning Pacific Time.
......
......@@ -14,7 +14,7 @@ Peripherals
Unit: millimeter (mm)
Origin: The center of the rear wheel Axle
......@@ -37,7 +37,7 @@ One camera with 6mm-lens should face the front of ego-vehicle. The front-facing
**Figure 3. Example setup of cameras**
After installation of the cameras, the physical x, y, z location of each camera w.r.t. the origin should be saved in the calibration file.
#### Verification of camera Setups
The orientation of all three cameras should be all zeros. When the cameras are installed, it is required to record a rosbag while driving on a straight highway. By replaying the rosbag, the camera orientation should be re-adjusted so that the pitch, yaw, and roll angles are zero degrees. When the camera is correctly installed, the horizon should be at the half of the image height and not tilted. The vanishing point should also be at the center of the image. Please see the image below for the ideal camera setup.
......@@ -46,7 +46,7 @@ The orientation of all three cameras should be all zeros. When the camera is ins
**Figure 4. An example of an image after camera installation. The horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. The red lines show the center of the width and the height of the image.**
The example of estimated translation parameters is shown below.
```
header:
seq: 0
......@@ -61,9 +61,9 @@ transform:
y: -0.5
z: 0.5
w: -0.5
translation:
x: 1.895
y: -0.235
z: 1.256
```
If angles are not zero, they need to be calibrated and represented in quaternion (see above transform->rotation).
......@@ -14,7 +14,7 @@ Peripherals
Unit: millimeter (mm)
Origin: The center of the rear wheel Axle
......@@ -37,7 +37,7 @@ One camera with 6mm-lens should face the front of ego-vehicle. The front-facing
**Figure 3. Example setup of cameras**
After installation of the cameras, the physical x, y, z location of each camera w.r.t. the origin should be saved in the calibration file.
#### Verification of camera Setups
The orientation of all three cameras should be all zeros. When the cameras are installed, it is required to record a rosbag while driving on a straight highway. By replaying the rosbag, the camera orientation should be re-adjusted so that the pitch, yaw, and roll angles are zero degrees. When the camera is correctly installed, the horizon should be at the half of the image height and not tilted. The vanishing point should also be at the center of the image. Please see the image below for the ideal camera setup.
......@@ -46,7 +46,7 @@ The orientation of all three cameras should be all zeros. When the camera is ins
**Figure 4. An example of an image after camera installation. The horizon should be at the half of image height and not tilted. The vanishing point should be also at the center of the image. The red lines show the center of the width and the height of the image.**
The example of estimated translation parameters is shown below.
```
header:
seq: 0
......@@ -61,9 +61,9 @@ transform:
y: -0.5
z: 0.5
w: -0.5
translation:
x: 1.895
y: -0.235
z: 1.256
```
If angles are not zero, they need to be calibrated and represented in quaternion (see above transform->rotation).
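For reference, with the common ZYX (yaw-pitch-roll) convention, a unit quaternion can be computed from roll (φ), pitch (θ), and yaw (ψ) as below. This is the generic textbook formula, not Apollo-specific, so confirm it matches the convention your calibration tooling expects:
```latex
w = \cos(\phi/2)\cos(\theta/2)\cos(\psi/2) + \sin(\phi/2)\sin(\theta/2)\sin(\psi/2)
x = \sin(\phi/2)\cos(\theta/2)\cos(\psi/2) - \cos(\phi/2)\sin(\theta/2)\sin(\psi/2)
y = \cos(\phi/2)\sin(\theta/2)\cos(\psi/2) + \sin(\phi/2)\cos(\theta/2)\sin(\psi/2)
z = \cos(\phi/2)\cos(\theta/2)\sin(\psi/2) - \sin(\phi/2)\sin(\theta/2)\cos(\psi/2)
```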
......@@ -6,7 +6,7 @@ Pandora is an all-in-one sensor kit for environmental sensing for self-driving c
#### Mounting
A customized mounting structure is required to successfully mount a Pandora kit on top of a vehicle. This structure must provide rigid support to the LiDAR system while raising the LiDAR to a certain height above the ground under driving conditions. This height should prevent the laser beams from the LiDAR being blocked by the front and/or rear of the vehicle. The actual height needed for the LiDAR depends on the design of the vehicle and the mounting point of the LiDAR relative to the vehicle. While planning the mounting height and angle, please read through the manual for additional details.
```
If for some reason, the LiDAR beam has to be blocked by the vehicle, it might be necessary to apply a filter to remove these points while processing the data received.
......@@ -14,7 +14,7 @@ If for some reason, the LiDAR beam has to be blocked by the vehicle, it might be
#### Wiring
Each Pandora includes a cable connection box and a corresponding cable bundle to connect to the power supply, the computer (ethernet) and the GPS timesync source.
![LiDAR_Cable](images/pandora_cable.png)
......@@ -28,10 +28,10 @@ Each Pandora includes a cable connection box and a corresponding cable bundle to
3. **Connection to the GPS**
The Pandora kit requires the recommended minimum specific GPS/Transit data (GPRMC) and pulse per second (PPS) signal to synchronize to GPS time. A customized connection is needed to establish the communication between the GPS receiver and the LiDAR. Please read your GPS manual for information on how to collect the output of those signals.
On the interface box, a GPS port (SM06B-SRSS-TB) is provided to send the GPS signals as an input to the LiDAR. The detailed pinout is shown in the image below.
| Pin # | Input/output | Comment |
| ----- | ------------ | ----------------------------------------------- |
| 1 | Input | PPS signal (3.3V) |
......
......@@ -40,7 +40,7 @@ Once this stage is complete, the output is directly sent to the Control module t
![](images/os_step3.png)
## Use Cases
Currently, Open Space Planner is used for 2 parking scenarios in the planning stage, namely:
......
......@@ -2,9 +2,9 @@
```
The ARS408 realizes a broad field of view by two independent scans in conjunction with the high range. Functions
like Adaptive Cruise Control, Forward Collision Warning and Emergency Brake Assist can be easily implemented.
Its capability to detect stationary objects without the help of a camera system emphasizes its performance. The ARS408 is a best in class radar,
especially for the stationary target detection and separation.
----Continental official website
```
......@@ -39,6 +39,5 @@ The following diagram contains the range of the ARS-408-21 Radar:
This device is `Apollo Platform Supported`
\ No newline at end of file
## Installation Guide of Racobit B01HC Radar
Racobit developed one Radar product with **60 degree FOV** and **150 m** detection range for autonomous driving needs.
![radar_image](images/b01hc.png)
......@@ -11,12 +11,11 @@ Racobit developed one Radar product with **60 degree FOV** and **150 m** detecti
3. Connect the power cable to **12VDC** power supply.
4. Connect the CAN output to the CAN interface of the IPC.
5. You should be able to receive the CAN messages through the CAN port once the Radar is powered (see the quick check after this list).
6. Please discuss with the vendor for additional support if needed while integrating it with your vehicle.
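As a sketch, if your CAN interface is exposed through SocketCAN on the IPC (this depends on the CAN hardware and driver in use; ESD CAN, for instance, ships its own tools instead), the can-utils package can dump incoming frames:
```bash
sudo apt-get install can-utils   # provides candump
candump can0                     # print frames arriving on the first CAN interface
```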
## Disclaimer
This device is `Apollo Hardware Development Platform Supported`
\ No newline at end of file
......@@ -90,7 +90,7 @@ Before uploading your data, take a note of:
```
Origin Folder -> Task Folder ->Vehicle Folder -> Records + Configuration files
```
1. A **task** folder needs to be created for your calibration job, such as task001, task002... (see the example layout after this list)
1. A vehicle folder needs to be created for your vehicle. The name of the folder should be the same as seen in Dreamview
1. Inside your folder, create a **Records** folder to hold the data
1. Store all the **Configuration files** along with the Records folder, within the **Vehicle** folder
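For illustration, a layout satisfying the rules above might look like this (every name except `Records` is hypothetical; the vehicle folder must match the name shown in Dreamview):
```
task001/                         # task folder
└── mkz_example/                 # vehicle folder, named exactly as in Dreamview
    ├── Records/                 # the recorded data
    │   └── ...
    └── vehicle_param.pb.txt     # configuration files, stored alongside Records
```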
......
# Dreamview Usage Table
Dreamview is a web application that:
1. visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc.
2. provides human-machine interface for users to view hardware status, turn on/off of modules, and start the autonomous driving car.
3. provides debugging tools, such as PnC Monitor to efficiently track module issues.
......@@ -11,16 +11,16 @@ The application layout is divided into several regions: header, sidebar, main vi
### Header
The Header has 3 drop-downs that can be set as shown:
![](images/dreamview_usage_table/header.png)
The Co-Driver switch is used to detect disengagement events automatically. Once detected, Dreamview will display a pop-up of the data recorder window for the co-driver to enter a new drive event.
Depending on the mode chosen from the mode selector, the corresponding modules and commands, defined in [hmi.conf](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/conf/hmi.conf), will be presented in the **Module Controller**, and **Quick Start**, respectively.
Note: navigation mode is for the purpose of the low-cost feature introduced in Apollo 2.5. Under this mode, Baidu (or Google) Map presents the absolute position of the ego-vehicle, while the main view has all objects and map elements presented in relative positions to the ego-vehicle.
### Sidebar and Tool View
![](images/dreamview_usage_table/sidebar.png)
Sidebar panel controls what is displayed in the tool view described below:
### Tasks
......@@ -38,12 +38,12 @@ All the tasks that you could perform in DreamView:
* **Console**: monitor messages from the Apollo platform
### Module Controller
A panel to view the hardware status and turn the modules on/off
![](images/dreamview_usage_table/module_controller.png)
### Layer Menu
A toggle menu for the display of visual elements.
![](images/dreamview_usage_table/layer_menu.png)
### Route Editing
A visual tool to plan a route before sending the routing request to the Routing module
......@@ -51,23 +51,23 @@ A visual tool to plan a route before sending the routing request to the Routing
### Data Recorder
A panel to report issues to the drive event topic ("/apollo/drive_event"), which is recorded to rosbag.
![](images/dreamview_usage_table/data_recorder.png)
### Default Routing
List of predefined routes or single points, known as point of interest (POI).
![](images/dreamview_usage_table/default_routing.png)
If route editing is on, routing point(s) can be added visually on the map.
If route editing is off, clicking a desired POI will send a routing request to the server. If the selected POI contains only a point, the start point of the routing request is the current position of the autonomous car; otherwise, the start position is the first point from the desired route.
To edit POIs, see [default_end_way_point.txt](https://github.com/ApolloAuto/apollo/blob/master/modules/map/data/demo/default_end_way_point.txt) file under the directory of the Map. For example, if the map selected from the map selector is "Demo", then [default_end_way_point.txt](https://github.com/ApolloAuto/apollo/blob/master/modules/map/data/demo/default_end_way_point.txt) is located under `modules/map/data/demo`.
### Main view
The main view shows animated 3D computer graphics in a web browser.
![](images/dreamview_usage_table/mainview.png)
Elements in the main view are listed in the table below:
......@@ -82,7 +82,7 @@ Elements in the main view are listed in the table below:
| ![](images/dreamview_usage_table/0clip_image038.png) |<ul><li> Nudge object decision -- the orange zone indicates the area to avoid </li></ul> |
| ![](images/dreamview_usage_table/0clip_image062.png) |<ul><li> The green thick curvy band indicates the planned trajectory </li></ul> |
#### Obstacles
| Visual Element | Depiction Explanation |
......@@ -124,8 +124,8 @@ When a yield decision is made based on the "Right of Way" laws at a stop-sign in
##### Stop reasons
When a STOP decision fence is shown, the reason to stop is displayed on the right side of the stop icon. Possible reasons and the corresponding icons are:
| Visual Element | Depiction Explanation |
| ---------------------------------------- | ---------------------------------------- |
| ![](images/dreamview_usage_table/0clip_image040.png) | <ul><li> **Clear-zone in front** </li></ul>|
......@@ -181,13 +181,13 @@ The Planning/Control tab from the monitor plots various graphs to reflect the in
#### Customizable Graphs for Planning Module
[planning_internal.proto](https://github.com/ApolloAuto/apollo2/blob/master/modules/planning/proto/planning_internal.proto#L180) is a protobuf that stores debugging information, which is processed by the dreamview server and sent to the dreamview client to help engineers debug. For users who want to plot their own graphs for new planning algorithms:
1. Fill in the information of your "chart" defined in planning_internal.proto.
2. X/Y axis: [**chart.proto** ](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto) has "Options" that you could set for axis which include
* min/max: minimum/maximum number for the scale
* label_string: axis label
* legend_display: to show or hide a chart legend.
<img src="images/dreamview_usage_table/pncmonitor_options.png" width="600" height="300" />
3. Dataset:
* Type: each graph can have multiple lines, polygons, and/or car markers defined in [**chart.proto**](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/proto/chart.proto):
* Line:
<img src="images/dreamview_usage_table/pncmonitor_line.png" width="600" height="300" />
......@@ -197,11 +197,11 @@ The Planning/Control tab from the monitor plots various graphs to reflect the in
* Car:
<img src="images/dreamview_usage_table/pncmonitor_car.png" width="600" height="300" />
* Label: each dataset must have a "Label" that is unique within each chart in order to help dreamview identify which dataset to update.
* Properties: for polygon and line, you can set styles. Dreamview uses **Chartjs.org** for graphs. Below are common ones:
| Name | Description | Example |
| ----------- | --------------------------------------- | ----------------------- |
| color | The line color | rgba(27, 249, 105, 0.5) |
| borderWidth | The line width | 2 |
......
......@@ -7,7 +7,7 @@ Simulation is a vital part of autonomous driving especially in Apollo where most
The architecture diagram for how Dynamic model works is included below:
![](images/architecture.png)
The Control module receives input via planning and the vehicle and uses it effectively to generate the output path, which is then fed into the Dynamic Model.
## Examples
......@@ -19,14 +19,14 @@ The green lines in the graph below are the actual planning trajectories for thos
```
1. **Longitudinal Control**
A pedestrian walks across the road and the ego car needs to stop by applying the brake
![](images/Longitudinal.png)
2. **Lateral Control**
The ego car has to make a wide-angle U-turn in this scenario. As seen in the image below, the steering turn is at 64%. You can also monitor the performance of the dynamic model on the right against the actual planned trajectory.
3. **Backward Behavior**
The ego car has to park itself in a designated spot. This scenario is complex as it requires a mixture of forward and backward (reverse) driving and requires a high level of accuracy from the control module. As you can see in the image below, the steering turn required is at `-92%`. Additional details on this example can be seen in the planning module's Park scenario.
![](images/Backward.png)
......
......@@ -10,7 +10,7 @@ Apollo 3.0 introduced a production level solution for the low-cost, closed venue
* **Asynchronous sensor fusion**: unlike the previous version, Perception in Apollo 3.0 is capable of consolidating all the information and data points by asynchronously fusing LiDAR, Radar and Camera data. Such conditions allow for more comprehensive data capture and reflect more practical sensor environments.
* **Online pose estimation**: This new feature estimates the pose of an ego-vehicle for every single frame. This feature helps to drive through bumps or slopes on the road with more accurate 3D scene understanding.
* **Ultrasonic sensors**: Perception in Apollo 3.0 now works with ultrasonic sensors. The output can be used for Automated Emergency Brake (AEB) and vertical/perpendicular parking.
* **Whole lane line**: Unlike previous lane line segments, this whole lane line feature will provide more accurate and long range detection of lane lines.
* **Visual localization**: Cameras are currently being tested to aid and enhance localization
* **16 beam LiDAR support**
......@@ -62,9 +62,9 @@ The lane can be represented by multiple sets of polylines such as next left lane
A CIPV is the closest vehicle in the ego-lane. An object is represented by 3D bounding box and its 2D projection from the top-down view localizes the object on the ground. Then, each object will be checked if it is in the ego-lane or not. Among the objects in our ego-lane, the closest one will be selected as a CIPV.
### Tailgating
Tailgating is a maneuver to follow the vehicle or object in front of the autonomous car. From the tracked objects and ego-vehicle motion, the trajectories of objects are estimated. This trajectory will guide how the objects are moving as a group on the road, and the future trajectory can be predicted. There are two kinds of tailgating: one is pure tailgating, which follows a specific car; the other is CIPV-guided tailgating, in which the ego-vehicle follows the CIPV's trajectory when no lane line is detected.
The snapshot of visualization of the output is shown in the figure below:
![Image](images/perception_visualization_apollo_3.0.png)
The figure above depicts the visualization of the Perception output in Apollo 3.0. The top-left image shows the image-based output. The bottom-left image shows the 3D bounding boxes of objects. The left image shows the 3-D top-down view of lane lines and objects. The CIPV is marked with a red bounding box. The yellow lines depict the trajectory of each vehicle
......
......@@ -25,7 +25,7 @@ To learn more about individual sub-modules, please visit [Perception - Apollo 3.
### Supports PaddlePaddle
The Apollo platform's perception module has so far depended on Caffe for its modelling, but will now also support PaddlePaddle, an open source platform developed by Baidu to support its various deep learning projects.
Some features include:
- **PCNNSeg**: Object detection from 128-channel lidar or a fusion of three 16-channel lidars using PaddlePaddle
- **PCameraDetector**: Object detection from a camera
......
......@@ -72,4 +72,4 @@
* [How to use Apollo 2.5 navigation mode](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md "How to use Apollo 2.5 navigation mode")
* [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/dreamview_usage_table.md "Introduction of Dreamview")