Localisation with Artificial Landmarks

Introduction

Localisation is a central problem in robotics, and it is especially relevant to the AVP project. A self-driving car looking for an empty parking slot must know where it is on the map. A precise manoeuvre, such as parking, requires an equally precise map and localisation algorithm.

The AVP project also has to respect a realistic budget for sensors, which rules out LiDARs in favour of cameras and IMUs. For this reason, the project is committed to developing a vision-based localisation solution that uses HD maps. Vision-based localisation, however, is very difficult, and no one has yet demonstrated a system that works accurately and robustly in a fully general environment.

Within the International Organization for Standardization (ISO), Technical Committee 204, Working Group 14, Parkopedia is part of a drafting team developing a standard for Automated Valet Parking Systems. The drafting team has agreed on the requirement for Artificial Landmarks, i.e. fiducial markers manually positioned in a car park to enable accurate, robust localisation. At minimum, artificial landmarks are necessary around the pick-up and drop-off zones to initialise the localisation system of a vehicle equipped with AVPS.

The next section will give an overview of localisation with landmarks.

Background to localisation with landmarks

The first step of localisation with landmarks consists of detecting the landmark with the available sensors. In our case the sensor is a camera, so we need to find the pixel coordinates of a landmark in an image. Note that we use the terms landmark and marker interchangeably.

The second step is to estimate the sensor position with respect to the landmark. With a calibrated camera and a marker of known size, a single image is sufficient to estimate the rigid transformation between the camera and the marker. The algorithm used is a variation of Perspective-n-Point.
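
As a sketch of this step, assuming the four corner pixels of a marker have been detected and the camera calibration (camera matrix K, distortion coefficients D) is known, OpenCV's solvePnP recovers the transformation. The helper below and its names are ours, not a library API:

#include <opencv2/calib3d.hpp>
#include <vector>

// Pose of a square marker of side length s from its four detected corners.
// detected_corners must be ordered to match the 3D corners below (top-left,
// top-right, bottom-right, bottom-left in the marker plane z = 0).
void markerPoseFromCorners(const std::vector<cv::Point2f>& detected_corners,
                           float s, const cv::Mat& K, const cv::Mat& D,
                           cv::Vec3d& rvec, cv::Vec3d& tvec) {
  const std::vector<cv::Point3f> object_points = {
      {-s / 2, s / 2, 0}, {s / 2, s / 2, 0},
      {s / 2, -s / 2, 0}, {-s / 2, -s / 2, 0}};
  // solvePnP returns the rigid transform from marker frame to camera frame
  cv::solvePnP(object_points, detected_corners, K, D, rvec, tvec);
}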

The third and last step is to estimate the pose of the camera in the map frame. We know the position of the camera with respect to the marker from step 2. Provided that the marker is distinctively identifiable, we can look up its position in the map. By chaining the two transformations, we obtain the desired pose of the camera in the map frame.
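
As a minimal sketch of the chaining, assuming the map provides each marker's pose as a 4×4 homogeneous transform keyed by marker id (the function names below are ours, not a library API):

#include <opencv2/calib3d.hpp>  // cv::Rodrigues
#include <opencv2/core.hpp>

// Build the 4x4 homogeneous transform T_camera_marker from the rvec/tvec of
// step 2 (the pose of the marker expressed in the camera frame).
cv::Matx44d toHomogeneous(const cv::Vec3d& rvec, const cv::Vec3d& tvec) {
  cv::Matx33d R;
  cv::Rodrigues(rvec, R);
  return {R(0, 0), R(0, 1), R(0, 2), tvec[0],
          R(1, 0), R(1, 1), R(1, 2), tvec[1],
          R(2, 0), R(2, 1), R(2, 2), tvec[2],
          0.0,     0.0,     0.0,     1.0};
}

// T_map_marker is looked up in the map using the marker id; inverting
// T_camera_marker gives the camera pose in the marker frame, and chaining
// the two yields the camera pose in the map frame.
cv::Matx44d cameraPoseInMap(const cv::Matx44d& T_map_marker,
                            const cv::Matx44d& T_camera_marker) {
  return T_map_marker * T_camera_marker.inv();
}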

Artificial Landmarks

There are many designs for artificial landmarks in the literature.

Given the localisation process outlined in the previous section, we know some requirements for a good landmark. It must be easy to detect, to facilitate the first step. It must be distinctive enough to be told apart from the other landmarks. Finally, it has to be of a size and shape that are easy to handle in practice.

The literature has converged on black-and-white square markers because of their properties:

  • High contrast
  • Simple geometry
  • Easy to encode information

High contrast and a square shape are clearly useful for detection, because they can be exploited by established computer vision techniques such as thresholding or line detection.

The information encoding offers more degrees of freedom, which different designs exploit differently, keeping in mind the goal of highly distinguishable landmarks.

All approaches start by subdividing the marker into a grid of small squares and using a binary encoding: each square is assigned a power of two, which is included or not depending on whether the square is black or white. With a 2×2 grid we can represent 16 values, as shown in the image below.
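
For illustration, a minimal encoding function for the 2×2 case (the names and the black/white convention are ours):

#include <array>
#include <cstdint>

// Encode a 2x2 grid of cells as a 4-bit id: each cell contributes a power of
// two when white (or black, the convention is arbitrary), giving 2^4 = 16 ids.
uint8_t encodeGrid(const std::array<std::array<bool, 2>, 2>& cells) {
  uint8_t id = 0;
  for (int row = 0; row < 2; ++row)
    for (int col = 0; col < 2; ++col)
      if (cells[row][col]) id |= 1u << (row * 2 + col);
  return id;
}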

If we were to use all the possible markers for a given grid size, we would have the problem of erroneous detections. Some markers are very similar, and minor occlusions or image noise could confuse the detection process. There are mainly two ways to deal with this problem, both based on the idea of sacrificing some information capacity to achieve greater robustness.

The first of these strategies is to dedicate some squares to error detection. These squares convey no information about the id of the marker, but act as a necessary condition for correctness. The use of parity bits is widely studied in information theory, and different schemes with different properties are available.

The second strategy is to maximise the distance between markers. Intuitively, two markers with no square in common are far apart, while a marker is maximally close to itself. This notion is formalised by the concept of Hamming distance, also a widely studied topic in information theory.
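
For example, the Hamming distance between two marker codes can be computed by counting differing bits (in practice the minimum over the four rotations of a marker is used):

#include <bitset>
#include <cstdint>

// Hamming distance between two marker codes: the number of cells in which
// the two patterns differ. Dictionary generators keep only candidates whose
// distance to every already-accepted marker exceeds a threshold.
int hammingDistance(uint64_t a, uint64_t b) {
  return static_cast<int>(std::bitset<64>(a ^ b).count());
}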

The next two sections will analyse two marker types: ArUco and the standard proposed by the ISO drafting team.

ArUco markers

ArUco markers are a state-of-the-art fiducial marker system explicitly designed for localisation.

The information encoding is designed for optimal inter-marker distance. Candidate markers are iteratively sampled, and only those with a sufficiently large Hamming distance from the markers already selected are kept.

ArUco is very flexible, as it provides multiple dictionaries of different sizes. The authors of the paper provide a production-grade implementation in OpenCV that also has Augmented Reality capabilities, which are very useful for debugging.

The following code snippet is an example of using the ArUco library for localisation.

#include <opencv2/aruco.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Small container for the detection and pose-estimation outputs (our own
// struct, not an OpenCV type)
struct MarkerVector {
  std::vector<std::vector<cv::Point2f>> corners;
  std::vector<int> ids;
  std::vector<cv::Vec3d> rotations;     // rvecs, one per detected marker
  std::vector<cv::Vec3d> translations;  // tvecs, one per detected marker
};

// Retrieve image of environment with ArUco markers
cv::Mat input_image = /* retrieve image from camera */;
// Initialise the predefined dictionary DICT_4X4_250
auto dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_250);
// Output parameter of the detection function
MarkerVector markers;
// Detect markers in the image
cv::aruco::detectMarkers(input_image, dictionary, markers.corners, markers.ids);
// Initialise camera intrinsics
cv::Mat K = /* camera matrix */;
cv::Mat D = /* distortion coefficients */;
// Set marker size (side length including the black border, in metres)
float marker_size = /* marker size */;
// Estimate the pose of each marker with respect to the camera
cv::aruco::estimatePoseSingleMarkers(markers.corners, marker_size, K, D,
                                     markers.rotations, markers.translations);

We provide a downloadable PDF version of the ArUco dictionary DICT_4X4_250. An A2 version, more suitable for printing, is also provided for convenience. It is common to print the markers on waterproof PVC and mount them on 3 mm plastic or aluminium.

ISO markers

The AVPS drafting team has chosen a custom definition for artificial landmarks that explicitly encodes orientation, data bits and parity bits for error checking. This encoding can be seen in the image below. Orientation is encoded through the four corner squares, with the top-left square white and the remaining three black. With the orientation fixed, the remaining 12 squares encode 8 data bits and 4 parity bits, forming a Hamming code.
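
The exact assignment of data and parity bits to grid cells is fixed by the draft standard, which we do not reproduce here. Purely as an illustration of the idea, a textbook Hamming(12,8) encoder with parity bits at positions 1, 2, 4 and 8 looks like this:

#include <cstdint>

// Illustrative only, not the ISO bit layout: a classic Hamming(12,8) encoder.
uint16_t hammingEncode(uint8_t data) {
  uint16_t code = 0;
  // Scatter the 8 data bits over the non-power-of-two positions 3,5,6,7,9..12
  const int data_pos[8] = {3, 5, 6, 7, 9, 10, 11, 12};
  for (int i = 0; i < 8; ++i)
    if (data & (1u << i)) code |= 1u << (data_pos[i] - 1);
  // Each parity bit p makes the XOR over all positions containing bit p even
  for (int p : {1, 2, 4, 8}) {
    int parity = 0;
    for (int pos = 1; pos <= 12; ++pos)
      if ((pos & p) && (code & (1u << (pos - 1)))) parity ^= 1;
    if (parity) code |= 1u << (p - 1);
  }
  return code;  // 12-bit codeword: any single-bit error can be corrected
}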

It is possible to create custom dictionaries in OpenCV for use with the ArUco library. We have encoded the ISO dictionary as a custom dictionary in a single header file that you can simply include in your software.

Only one line of code from the previous example has to change (the creation of the dictionary); the same code can then be used to detect ISO markers.

auto dictionary = cv::makePtr<cv::aruco::Dictionary>(iso::generateISODictionary());

By using the library in this straightforward way, we see a lot of false positives, because we are not using the error-correcting properties of the Hamming codes. A first step to improve the situation is to set the detector parameter errorCorrectionRate to zero, to disable the default correction done by ArUco. A better solution, which uses the full potential of the Hamming codes, requires modifying the ArUco detection algorithm.
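
With the classic (pre-4.7) OpenCV aruco API, for example, the correction can be disabled through the detector parameters:

#include <opencv2/aruco.hpp>

// Disable ArUco's bit-level error correction so that only exact matches with
// the dictionary are accepted; the Hamming parity bits then reject most
// false positives.
cv::Ptr<cv::aruco::DetectorParameters> params =
    cv::aruco::DetectorParameters::create();
params->errorCorrectionRate = 0.0;
cv::aruco::detectMarkers(input_image, dictionary, markers.corners, markers.ids,
                         params);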

As in the case of ArUco, we provide PDFs of the ISO dictionary as 30×30 cm squares and in A2 format.

Conclusion

A reasonable question to ask at this point is whether all of the above is even necessary. Can we not enable localisation without artificial landmarks?

It turns out that this is a difficult problem for which the industry is still seeking a solution. The University of Surrey is developing a vision-based localisation algorithm that avoids the use of Artificial Landmarks as part of the AVP project, and we look forward to demonstrating this technology on the AVP StreetDrone.

Expect to see a StreetDrone parking itself autonomously soon!

Publication of Safety Documents

We are delighted to be publishing today the Safety Documents created to support the safe testing of the Automated Valet Parking function using the StreetDrone test vehicle in a car park.

This is the result of months of effort by Connected Places Catapult and Parkopedia. Following the post on Systems Engineering, these documents now support the case made to drive under autonomous control within a car park.

These safety documents encompass system safety and operational safety. 

  • System Safety:
  • Operational Safety:

Note:

The Safety Plan and the Requirements are live documents (they have been updated for testing in a car park and will be updated for future milestones).

Any entity seeking to conduct autonomous vehicle trials will need to develop and publish a safety case specific to their own trials (as specified by the government’s Centre for Connected & Autonomous Vehicles (CCAV) Code of Practice for Automated Vehicle Trialling) and gain permission to do so.

Path planning using the OSM XML map

One of the main objectives of the AVP project is to create maps of car parks. Parkopedia is committed to working with our Open Source partners through the Autoware Foundation and has therefore released three maps of car parks to the community under the Creative Commons BY-NC-SA 4.0 licence.

The maps are designed to be machine-readable and are supplied in the OpenStreetMap XML format. This format is widely used and forms the basis for the OpenStreetMap maps that anyone can contribute to using tools such as the Java OpenStreetMap editor (JOSM).

Our maps are designed to be useful for Automated Driving, which is why we’ve decided to make use of the Lanelet library as the data model for maps within the Autonomous Valet Parking prototype vehicle.

You can download the maps here, and the following code can be used to plan a path using the Lanelet2 library.

# libs
import os
import lanelet2

# load the map, for example autonomoustuff
osm_path = os.path.join(os.path.dirname(os.path.abspath('')), "AutonomouStuff_20191119_134123.osm")
print("using OSM: %s (exists? %s)" % (osm_path, os.path.exists(osm_path)))

# load map from origin
lorigin = lanelet2.io.Origin(37.3823636, -121.9091568, 0.0)
lmap = lanelet2.io.load(osm_path, lorigin)

# ... and traffic rules (Germany is the sole location, for now)
trafficRules = lanelet2.traffic_rules.create(lanelet2.traffic_rules.Locations.Germany, lanelet2.traffic_rules.Participants.Vehicle)
graph = lanelet2.routing.RoutingGraph(lmap, trafficRules)

# select start and end lanelets for the shortest path (IDs taken from the map)
startLane = lmap.laneletLayer[2797] # lanelet IDs
endLane = lmap.laneletLayer[2938]
rt = graph.getRoute(startLane, endLane)
if rt is None:
    print("error: no route was calculated")
else:
    sp = rt.shortestPath()
    if sp is None:
        print("error: no shortest path was calculated")
    else:
        print([l.id for l in sp.getRemainingLane(startLane)])

# save the path in another OSM map with a special tag to highlight it
if sp:
    for llet in sp.getRemainingLane(startLane):
        lmap.laneletLayer[llet.id].attributes["shortestPath"] = "True"
    projector = lanelet2.projection.MercatorProjector(lorigin)
    sp_path = os.path.join(os.path.dirname(osm_path), os.path.basename(osm_path).split(".")[0] + "_shortestpath.osm")
    lanelet2.io.write(sp_path, lmap, projector)

# now display in JOSM both, and you can see the path generated over the JOSM map 
# Ctrl+F -->  type:relation "type"="lanelet" "shortestPath"="True"
# and the path will be highlighted as the image below

Happy path planning!

Halfway and a successful public demo!

The Autonomous Valet Parking project is a 30-month project funded by InnovateUK and the Centre for Connected and Autonomous Vehicles, due to end on 31 October 2020. We are five quarters in and a lot has been achieved:

  • We have created our first maps of car parks which we are trialling with customers,
  • The vision-based localisation algorithms are well advanced,
  • The stakeholder engagement work is complete,
  • The autonomous software to power our StreetDrone is ready to take out for a demo,
  • The safety case has ensured that we are in a position to demo this safely. 

At the CENEX-CAM event that took place at Millbrook Proving Ground, UK, on 4-5 September 2019, we showed the progress made in the project at this halfway point. The main objective was to demonstrate that the Autoware-based software on the StreetDrone is able to control the vehicle by following waypoints consistently and accurately. The demonstration scenario consisted of three parts and reflects how we believe AVP will be used in real life. In the demo, you can see:

  1. The vehicle following a pre-defined set of waypoints to the designated parking spot, having been dropped off at a designated drop-off zone by its driver,
  2. The vehicle exiting the parking spot and driving to the pick-up zone (where the vehicle’s regular driver would collect it),
  3. A test of our automatic emergency braking, using the front-centre ultrasonic sensor on the vehicle.

This public demo was an important milestone for us to demonstrate our ability to control the vehicle using a PID controller for longitudinal (throttle) control and pure-pursuit for lateral (steering) control.
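
For reference, a textbook sketch of the pure-pursuit steering law follows; it is a generic formulation, not the exact controller running on the vehicle:

#include <cmath>

// Pure-pursuit steering law: alpha is the angle between the vehicle heading
// and the line to a look-ahead point at distance ld along the path, and L is
// the wheelbase. Returns the commanded steering angle delta.
double purePursuitSteering(double alpha, double ld, double L) {
  return std::atan2(2.0 * L * std::sin(alpha), ld);
}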

Localisation is done using the LiDAR with NDT matching. At this stage of the project we've limited the speed to 1 m/s; this will double to 2 m/s (about 5 mph) in the future.

We are using the SAE convention for marking lamps, with green for manual control, blue when under autonomous control and red when an error state occurs. Adding the RGB LED lighting was done alongside the development work to enable switching between forwards and reverse in software while still in autonomous mode.

The safety case for the project combines operational and system safety. On the operational side we have a safety driver who can take over when a dangerous situation presents itself; on the system side, the LiDAR and ultrasonic sensors will bring the vehicle to a stop to avoid driving into a hazard. We demonstrated Automated Emergency Braking using the ultrasonic sensor, following the testing and preparation done previously at Turweston.

Overall, the demo (done five times in two days) was well received, and we saw good levels of interest from delegates at the event, with lots of questions being asked about the project. It was a pleasure to speak to media and to delegates.

We’ve learned a lot from our friends at Ordnance Survey and we look forward to hosting them and others at the upcoming Autoware Hackathon. Many thanks to them for all the help and for storing our StreetDrone overnight under their gazebo!

Over the remaining 13 months of the project, we will be working on navigation and localisation using maps, with a final demonstration of the end solution due to take place in Autumn 2020.

Systems Engineering on AVP

You know how it can be a real challenge to understand the complexity of emerging multi-modal and automated transport systems?

By harnessing Systems Engineering best practices, our Systems Engineers at the Connected Places Catapult (CPC) use their expertise in systems requirements gathering, safety management, and verification and validation, ensuring that all aspects and interactions in the AVP system are clearly defined, integrated, evaluated and tested, thus improving our ability to deliver a safe, efficient and innovative system.

Systems Engineering is an established practice capable of delivering technically complex systems, and Systems Engineers swear by the V lifecycle model, which shows the logical relationship between the different System Engineering activities.

In the V model, time and system maturity proceed from left to right:

System Engineering “V” diagram

In each stage of the system life cycle, the Systems Engineering process iterates to ensure that a concept or design is feasible, and that the stakeholders remain supportive of the solution as it evolves.

The process is applied sequentially, one level at a time, adding additional detail and definition with each level of development.

1. Concept Development

The concept development is the first step of the life cycle of the AVP project. This stage explores concepts, identifies stakeholders’ needs and proposes viable solutions.

It includes activities such as:

  • Concept of Operations (ConOps)
  • Systems Engineering Activity Plan
  • Market Analysis
  • Competitor Assessment
  • Regulatory Framework

2. Requirement Management

Requirements definitions are the key to success in the design and development of any complex system. Our Systems Engineers have carefully elicited requirements from all the project stakeholders to ensure the final product meets everyone’s needs, from technical feasibility to budget considerations and testability.

The total set of requirements encompasses the functional, performance, and non-functional requirements and the architectural constraints.

3. System Architecture

The architecture of a system is its fundamental structure. The purpose of drawing an architecture is to define a comprehensive solution based on principles, concepts, and properties logically related and consistent with each other. The architecture defines the system boundary and functions, from which more detailed system requirements can be derived.

This step of the cycle is focused on developing:

  • System-Level Architecture
  • Functional Architecture

At the start of the AVP project, both of these architectures were drawn. The system-level architecture describes the overall system and its operating environment. The functionalities that enable the AVP technology are mapped onto the functional architecture, which is based on the Autoware ROS stack and shows modules such as perception, sensor fusion, path planning, and others.

4. System design and development

Once the initial opportunities and constraints have been identified in the steps above, it is time to create and describe in detail the AVP system to satisfy the identified needs.

While the Parkopedia Engineers work on developing the AVP system in accordance with the ConOps, Requirements and System Architecture, the Systems Engineers at the CPC focus on the functional system analysis and producing the Safety Case for the AVP testing.

This lifecycle step builds on ISO 26262, which addresses the functional safety of road vehicles. System-level safety is analysed through functional analyses such as the Failure Mode and Effects Analysis (FMEA) and the Hazard Analysis and Risk Assessment (HARA), which yield the system safety goals (translated into the safety requirements for the system). These form the system safety argument of the Safety Plan for the AVP project.
This step is crucial to ensure the risk is minimised if and when the system fails. Every malfunction leads to a warning and a degradation strategy, and the safety driver must be able to take control of the vehicle.

Note: the system analysis also involves reviewing all of the previous deliverables as the design is being developed.

5. System Integration

The AVP system is made of different system elements provided by different parties:
  • The vehicle is provided by StreetDrone
  • The software, application and maps are provided by Parkopedia
  • The localisation system is developed by the University of Surrey

This step ensures that all of the disparate system elements integrate to create an effective whole, and allows the different teams to work in parallel, confident that all of the pieces will fit and work together.
The interface management activity relates to the steps taken to integrate the different architecture elements:

  • Reviewing and refining the previously mentioned deliverables (ConOps, System Architecture, Requirements, FMEA, HARA)
  • Assessing the criteria and methods for Requirements verification and validation (Analysis/Inspection/Test/Demonstration)
  • The Interface Control Document (ICD), which describes the integration strategy of the AVP system. This is a live document and is continuously updated until the end of the system development and testing.

6. Test & Evaluation

Prior to driving in the desired environment, the vehicle has to demonstrate that it is capable of safely navigating and responding to diverse situations and scenarios, such as emerging pedestrians, vehicles, obstacles, etc. By combining simulation and testing in a real environment (open field track and car parks), the AVP system can be validated.

This step focuses on ensuring that the developed system is operationally acceptable and that the responsibility for the effective, efficient, and safe operations of the system is transferred to the owner.

  • Can the system be tested, demonstrated, inspected, or analysed to show that it satisfies requirements?
  • Are the requirements stated precisely enough to facilitate specification of system test success criteria and requirements?

Testing will always be performed with a highly trained safety driver, who will monitor the vehicle behaviour at all times and be ready to take over in the event of a failure. An engineer will monitor the path on an HMI, so that the safety driver is not distracted. The system must be able to detect and respond to a wide range of static obstacles in the testing environment.

Prior to testing, some activities need to be completed to ensure the safety of the AVP system:

Safety Plan

It examines the evidence required to show that acceptable safety has been achieved for any given testing activity, and how that evidence is to be collected, such that when all the evidence is taken together, there is a convincing argument that the project as a whole is safe.

Safety Case

It summarises the evidence collected prior to commencing Autonomous Valet Parking testing in accordance with the Safety Plan. There will be a Safety Case for each test phase: testing in a controlled environment (open field track), testing in car parks and the demonstration.

RAMS (Risk Assessment and Method Statement)

This document sets out the safety procedures that all participants are required to follow during testing of the autonomous control system, and ensures the requirements and procedures are read, understood and adhered to at all times by all involved in the trial. The Risk Assessment for the trial is also included in this document. As with the Safety Case, a RAMS will be prepared for each testing phase.

After testing, the requirements are reviewed and given a pass/fail mark depending on the acceptance criteria set. In both cases, a justification needs to be provided.

Once again, the previous deliverables are reviewed and refined as required.

7. Operation and maintenance

The AVP system has now been tested in different test environments and settings. The system is ready to be deployed, and as part of this activity, several demonstrations are organised as evidence of the operational effectiveness of the deployed AVP system.

The Systems Engineers carry out the activities below, verifying and validating the concept one last time:

  • RAMS
  • Safety Case close out
  • Verification and Validation close out
  • Review and refine previous deliverables as required

Systems Engineering Process Flow Diagram

8. Conclusion

The self-driving car industry is a fast-moving and uncertain environment, which makes safety the number one priority. This is also why the AVP project consortium is encouraging the industry to work together, collaborate, share use-cases and lessons learnt. Self-driving cars have the potential to ensure safety of passengers, reduce road congestion, drive smarter and in a more sustainable way. By implementing a robust Systems Engineering process, the self-driving car industry can mitigate risks and drive society safely into the future.

Autoware workshop @ Intelligent Vehicles Symposium 2019

IV2019 brings together researchers and practitioners from universities, industry and government agencies worldwide to share and discuss the latest advances in theory and technology related to intelligent vehicles.

The Autoware Foundation is hosting a workshop on Sunday 9th June 2019, with the aim of discussing the current state of development of Autoware.AI and Autoware.Auto, and considering various technical directions that the Foundation is looking to pursue. Parkopedia’s contribution is around maps, specifically the integration of indoor maps for Autonomous Valet Parking.

Parkopedia’s Angelo Mastroberardino will be presenting our work on maps, answering questions like “Why do we need these maps?”, “How do we represent geometry and road markings within maps?”, and naturally leading towards the question of how we use these maps for path planning within indoor car parks.

Later, Dr Brian Holt will be joining Tier 4, Apex.AI, Open Robotics, TRI-AD, and Intel on a panel to discuss Autoware and its impact on autonomous driving.

Parkopedia joined the Autoware Foundation as a premium founding member, because we believe in open source as a force multiplier to build amazing software. We are contributing maps, including for the AutonomouStuff car park in San Jose, USA, which you can download for use with your own self-driving car in simulation. Find out more

Car driver and stakeholder research

Connected Places Catapult (CPC) have conducted research with UK car drivers and stakeholders to better understand public and industry readiness for autonomous valet parking (AVP).

The key questions guiding the research were:

  • What are the key parking pain points that can be resolved by AVP?
  • What are other likely benefits of AVP to users and parking stakeholders?
  • What are the key barriers to AVP deployment and uptake, from a social and behavioural point of view?
  • What will be the likely impact of AVP, on the environment, the economy, and the parking industry?

The report produced details the findings, conclusions and recommendations from a suite of research activities:

  • A literature review to explore existing knowledge about the chosen research topics and questions;
  • Stakeholder interviews with parking professionals and OEMs to explore their views of AVP;
  • Focus group interviews to explore the needs and attitudes of drivers in-depth; and
  • A UK-wide survey of 1,025 car drivers to examine differences between user groups, and to gauge how common certain attitudes or needs are.

The research found that car drivers would be more receptive to the car taking control in a car park than on the roads, and that a technology solution that can reduce the stress of parking and make parking easier would be appealing. One in five drivers would like to use AVP now, with a further 40% open to the idea but wanting to know more about it. Likely early adopters are younger male drivers and those who have previously used driver-assistance technology or a valet parking service.

Please see the full report of the research here.

First Autonomous Test

We successfully completed our first tests at Turweston Aerodrome last week.

Unloading the StreetDrone.ONE

The plan was to check and ensure the robustness of the drive-by-wire system, to train our safety drivers and to do basic path following.

We also took the opportunity to collect some data from the ultrasonic sensors on the StreetDrone.ONE, which we will use for system safety.

Ultrasonic data collection
Drive-by-wire testing

To test the drive-by-wire system, we carried out a number of test runs using teleoperation from the driver. We drove the track forwards and backwards, changing steering at various speeds, and the system performed satisfactorily. We also performed a full brake test to work out a safe driving speed and stopping distance in the case of an emergency stop. Further details are presented in this blog post.

Path following with PID feedback control and basic GPS and IMU localisation

On the final day, we tested basic path following to make sure everything worked together. We integrated the drive-by-wire (dbw) system with a path follower, a PID motion controller and basic GPS and IMU localisation in this open-space environment.

We managed to achieve the objective of testing the dbw system with feedback control for path following. However, precision was not as good as we expected. Basic IMU and GPS localisation does not give very accurate positioning: it tends to drift or jump around, with errors of up to 5 m. To resolve this issue, we are working on better localisation using an RTK GPS receiver (such as the simpleRTK2B) with RTK corrections delivered over NTRIP.

Our next test will focus on path following using the high-quality localisation, and we also hope to start on path planning within the open space. More updates will follow!

System Safety with Ultrasonic Sensors

Safety is the highest priority of the Autonomous Valet Parking project. As we seek to demonstrate that Parkopedia’s maps are suitable for localisation and navigation within covered car parks, our safety case must ensure the safety of people, vehicles and infrastructure both inside and outside our test vehicle. There have been some highly publicised incidents involving other organisations’ autonomous vehicles over the past year or two, so what is the AVP project doing around safety?

Our Safety Case involves a combination of System Safety and Operational Safety, to achieve the required assurance levels around our activities. System Safety covers any and all aspects of the system (hardware, software or both) that contribute to the safety case and is the focus of this post. Operational Safety is all the operational decisions taken to ensure that we are conducting the tests safely and you can read more about that here.

Our test vehicle, a StreetDrone.ONE, is a converted Twizy and comes with ultrasonic range sensors. There are eight Neobotix ultrasonic sensors in total: three at the front, three at the rear, and one in each door looking sideways. The front sensors are slightly fanned out, and at the rear the outer two are corner-mounted. The signals from each sensor are gathered by a Neobotix board, which publishes the range data as CAN bus messages. These can be monitored and action taken when a significant measurement is made.

The sensors create a “virtual safety cage”. This virtual safety cage can be imagined as an invisible cuboid around the StreetDrone.ONE, slightly wider and longer than the car itself. If anything, be it a car, pedestrian or wall, intrudes inside this cage, the car should stop immediately, acting as belt and braces for the perception and navigation parts of the AI driving the car.
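
A minimal sketch of that check follows; the array layout and threshold handling are our own illustration, not the actual vehicle interface:

#include <array>
#include <cstddef>

// Virtual safety cage check: the eight entries correspond to the eight
// ultrasonic sensors, and cage_limits_m encodes the cage boundary at each
// mounting position. Sensor access and the brake command are hypothetical
// placeholders.
bool cageBreached(const std::array<float, 8>& ranges_m,
                  const std::array<float, 8>& cage_limits_m) {
  for (std::size_t i = 0; i < ranges_m.size(); ++i) {
    if (ranges_m[i] < cage_limits_m[i]) {
      return true;  // something intruded into the cage: stop immediately
    }
  }
  return false;
}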

The video below shows a demonstration of the drive-by-wire system of the StreetDrone at 14 mph. The brake is applied by the actuators at the very beginning of the 3-metre-wide white strip. From a 14 mph starting speed, we calculated the braking distance to be 4.5 metres. This is obviously an approximation; the actual braking distance and time depend on many factors, including brake disc wear and tear.

Applying 100% brake at 14mph

Remembering our high-school equations of motion:

u^2 = v^2 + 2as

where “v” is the initial velocity, “u” the final velocity, “a” the acceleration and “s” the distance.

Now we can rearrange this to obtain the acceleration, remembering that in our case the final velocity is zero:

a = −v^2 / (2s), i.e. a deceleration of (6.26 m/s)^2 / (2 × 4.5 m) ≈ 4.35 m/s^2 ≈ 0.44 g

These numbers match well with similar experiments carried out by StreetDrone using IMU data, which showed a peak deceleration of 0.67 g and an average deceleration of 0.46 g.

The maximum range of the Neobotix ultrasound sensor is 1.5 metres, so we can run the above calculation in reverse to find the maximum safe speed. Allowing for a safety buffer of 0.5 m, the distance “s” is 1.0 metres, the deceleration “a” is 4.3 m/s^2, and again the final velocity “u” is 0 m/s:

v = √(2as) = √(2 × 4.3 m/s^2 × 1.0 m) ≈ 2.9 m/s ≈ 6.6 mph
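
The same arithmetic as a short snippet, using the values stated above:

#include <cmath>
#include <cstdio>

// Deceleration from the measured 4.5 m stop at 14 mph, then the maximum safe
// speed for 1.0 m of usable sensing distance.
int main() {
  const double v0 = 14.0 * 0.44704;               // 14 mph in m/s
  const double a = v0 * v0 / (2.0 * 4.5);         // ~4.35 m/s^2
  const double v_max = std::sqrt(2.0 * a * 1.0);  // ~2.9 m/s (~6.6 mph)
  std::printf("a = %.2f m/s^2, v_max = %.2f m/s\n", a, v_max);
  return 0;
}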

The present AVP plan calls for a maximum speed within car parks of 5 mph which is well within the safety margin calculated above. The next step now is to process the data we’ve captured from the ultrasonic sensors and to develop the software that will automatically apply the maximum brake whenever the virtual safety cage is breached.

We are ready to test!

On the track at Turweston Aerodrome

Exciting times: this week marks the start of testing for our StreetDrone autonomous vehicle, as we build towards an Autonomous Valet Parking demonstrator.

This first testing phase will be in a controlled environment to minimise risk. For that reason, we’ve chosen Turweston Flight Centre, which our friends at StreetDrone have previously used for some of their testing.

In accordance with our project Safety Plan and the Safety Case for this phase of the project, we’ve also been busy collecting safety evidence prior to starting the live tests. These documents will be made publicly available as part of our goal to be as transparent as possible, and for those who wish to use them as a starting point for their own safety case.

For this phase, our safety evidence documents are:

  • Safety Plan
  • Safety Case Summary
  • Risk Assessment and Method Statement
  • Review of the Requirements
  • StreetDrone User Manual
  • Incident Reporting Spreadsheet

Stay tuned for our next update!
