Unverified commit 88305abb, authored by Kelvin S. do Prado, committed by GitHub

Fix typos AWS documentation (#2250)

* Fix typos AWS documentation

* Fix typos GKE documentation

* Fix typos IBM documentation

* Fix typos Azure documentation

* Fix typos components documentation

* Fix typos pipelines documentation

* Fix minor typos in the documentation

* Update content/en/docs/aws/pipeline.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

* Update content/en/docs/aws/authentication.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

* Update content/en/docs/aws/authentication.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

* Update content/en/docs/aws/authentication.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

* Update content/en/docs/aws/authentication.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

* Revert commits not related to AWS documentation

* Update content/en/docs/aws/authentication.md
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
Parent ce7bc148
@@ -15,9 +15,9 @@ In order to simplify your setup, we highly recommend you to use this manifest.
## Traffic Flow
External Traffic → [ Ingress → Istio ingress gateway → Istio virtual services ]
-When you generate and apply kubernetes resources, an ingress is created to manage external traffic to Kubernetes services. The AWS Appliction Load Balancer(ALB) Ingress Controller will provision an Application Load balancer for that ingress. By default, TLS and authentication are not enabled at creation time.
+When you generate and apply Kubernetes resources, an ingress is created to manage external traffic to Kubernetes services. The AWS Application Load Balancer (ALB) Ingress Controller will provision an Application Load Balancer for that ingress. By default, TLS and authentication are not enabled at creation time.
-Kubeflow uses [Istio](https://istio.io/) to manage internal traffic. In AWS solution, TLS, authentication can be done at the ALB and and authorization can be done at Istio layer.
+Kubeflow uses [Istio](https://istio.io/) to manage internal traffic. In the AWS solution, TLS and authentication are handled at the ALB, and authorization is handled at the Istio layer.
## Enable TLS and Authentication
@@ -65,9 +65,9 @@ plugins:
....
```
-> Note: You can use your own domain for `cognitoUserPoolDomain`. In this case, we just use Amazon Coginito domain `kubeflow-testing`. If you use your own domain, please check [aws-e2e](/docs/aws/aws-e2e) for more details.
+> Note: You can use your own domain for `cognitoUserPoolDomain`. In this case, we just use the Amazon Cognito domain `kubeflow-testing`. If you use your own domain, please check [aws-e2e](/docs/aws/aws-e2e) for more details.
-After you finish the TLS and Authentication configuration, then you can run `kfctl apply -V -f ${CONFIG_FILE}`.
+After you finish the TLS and Authentication configuration, run this command: `kfctl apply -V -f ${CONFIG_FILE}`.
After a while, your ALB will be ready. You can get the ALB hostname by running the following command.
@@ -83,7 +83,7 @@ Update your callback URLs.
class="mt-3 mb-3 border border-info rounded">
-Then you can visit kubeflow dahsboard using your ALB hostname.
+Then you can visit the Kubeflow dashboard using your ALB hostname.
<img src="/docs/images/aws/authentication.png"
alt="Cognito Authentication pop-up"
@@ -105,7 +105,7 @@ spec:
name: kubeflow-user@amazon.com
```
-The `ServiceRole` `ns-access-istio` is created and it allows user to access all the services in that namespace. `ServiceRoleBinding` `owner-binding-istio` define subject like beflow. Only request with header `kubeflow-userid: kubeflow@amazon.com` can have pass istio RBAC and visit the service
+The `ServiceRole` `ns-access-istio` is created, and it allows the user to access all the services in that namespace. The `ServiceRoleBinding` `owner-binding-istio` defines its subjects as shown below. Only requests with the header `kubeflow-userid: kubeflow@amazon.com` can pass Istio RBAC and access the service.
```yaml
subjects:
@@ -113,8 +113,8 @@ subjects:
request.headers[kubeflow-userid]: kubeflow-user@amazon.com
```
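For reference, a complete binding around these subjects might look like the following sketch. It assumes Istio's `rbac.istio.io/v1alpha1` API and a profile namespace named `anonymous`; adjust the names to your deployment:

```yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: owner-binding-istio
  namespace: anonymous   # assumption: the profile namespace
spec:
  subjects:
  # Only requests carrying this header pass Istio RBAC
  - properties:
      request.headers[kubeflow-userid]: kubeflow-user@amazon.com
  roleRef:
    kind: ServiceRole
    name: ns-access-istio
```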
-After ALB load balancer authenticates a user successfully, it sends the user claims received from the IdP to the target. The load balancer signs the user claim so that applications can verify the signature and verify that the claims were sent by the load balancer. Applications that require the full user claims can use any standard JWT library to verify the JWT tokens.
+After the ALB authenticates the user successfully, it sends the user claims received from the IdP to the target. The load balancer signs the user claims so that applications can verify the signature and confirm that the claims were sent by the load balancer. Applications that require the full user claims can use any standard JWT library to verify the JWT tokens.
-Header `x-amzn-oidc-data` stores user claims, in JSON web tokens (JWT) format. In order to create a `kubeflow-userid` header, we create [aws-istio-authz-adaptor](https://github.com/kubeflow/manifests/tree/master/aws/aws-istio-authz-adaptor) which is an isito [route directive adpater](https://istio.io/docs/tasks/policy-enforcement/control-headers/). It modifies traffic metadata using operation templates on the request and response headers. In this case, we decode JWT token `x-amzn-oidc-data` and retrieve user claim, then append a new header to user's requests.
+The header called `x-amzn-oidc-data` stores user claims in JSON web tokens (JWT) format. In order to create a `kubeflow-userid` header, you should create [aws-istio-authz-adaptor](https://github.com/kubeflow/manifests/tree/master/aws/aws-istio-authz-adaptor), which is an Istio [route directive adapter](https://istio.io/docs/tasks/policy-enforcement/control-headers/). It modifies traffic metadata using operation templates on the request and response headers. In this case, you: 1) decode the JWT token - `x-amzn-oidc-data`; 2) retrieve the user claim; and 3) append the new header to user's requests.
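As a sketch of the decoding step only (not the adapter's actual implementation), the user claims are the middle, base64url-encoded segment of the JWT; verification of the ALB's signature is deliberately omitted here:

```python
import base64
import json

def user_claims(jwt_token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature."""
    payload = jwt_token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Demo with a synthetic token (header and signature segments are dummies):
claims = {"email": "kubeflow-user@amazon.com"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "e30." + payload + ".sig"
print(user_claims(token)["email"])  # kubeflow-user@amazon.com
```

In production you would verify the signature with the ALB's public key before trusting the claims; this sketch only shows the payload extraction.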
-Check [Enable multi-user authorization for AWS](https://github.com/kubeflow/kubeflow/issues/4761) for more technical details.
+For more information, refer to the [Enable multi-user authorization for AWS](https://github.com/kubeflow/kubeflow/issues/4761) issue on GitHub.
@@ -161,7 +161,7 @@ Add namespace record, key should be the subdomain name `platform`, value is your
class="mt-3 mb-3 border border-info rounded">
-In order to make Coginito to use custom domain name, A record is required to resolve `platform.domain.com` as root domain, which can be a Route53 Alias to the ALB as well. We can use abitrary ip here now, once we have ALB created, we will update the value later.
+To make Cognito use a custom domain name, an A record is required to resolve `platform.domain.com` as the root domain; this can also be a Route53 Alias to the ALB. We can use an arbitrary IP here for now; once the ALB is created, we will update the value.
If you're not using Route53, you can point that A record anywhere.
......
@@ -83,7 +83,7 @@ Notes:
* **${AWS_CLUSTER_NAME}** - The name of your EKS cluster.
This will be picked up by `kfctl` and set as the value of `metadata.name`.
-`alb-ingress-controller` requires correct value to provision application load balanders.
+`alb-ingress-controller` requires the correct value to provision application load balancers.
The ALB will only be created with the correct cluster name.
@@ -216,7 +216,7 @@ kubectl rollout restart deployment dex -n auth
Kubeflow provides multi-tenancy support, and users are not able to create notebooks in the `kubeflow` or `default` namespaces.
-The first time you visit the cluster, you can ceate a namespace `anonymous` to use. If you want to create different users, you can create `Profile` and then `kubectl apply -f profile.yaml`. Profile controller will create new namespace and service account which is allowed to create notebook in that namespace.
+The first time you visit the cluster, you can create a namespace `anonymous` to use. If you want to create different users, you can create a `Profile` and then run `kubectl apply -f profile.yaml`. The Profile controller will create a new namespace and a service account that is allowed to create notebooks in that namespace.
```yaml
apiVersion: kubeflow.org/v1beta1
......
@@ -6,7 +6,7 @@ weight = 90
## Authenticate Kubeflow Pipeline using SDK inside cluster
-In v1.1.0, in-cluster communitation from notebook to Kubeflow Pipeline is not supported in this phase. In order to use `kfp` as previous, user needs to pass a cookie to KFP for communication as a walkaround.
+In v1.1.0, in-cluster communication from notebooks to Kubeflow Pipelines is not supported. In order to use `kfp` as before, the user needs to pass a cookie to KFP as a workaround.
You can follow the steps below to get a cookie from your browser after you log in to Kubeflow. The following examples use the Chrome browser.
> Note: You have to use images in [AWS Jupyter Notebook](/docs/aws/notebook-server) because it includes a critical SDK fix [here](https://github.com/kubeflow/pipelines/pull/4285).
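Once you have the cookie, the workaround can be sketched as follows. The hostname and cookie value are placeholders, and the exact cookie name depends on your authentication setup (for example, `authservice_session` with Dex):

```python
import kfp

# Placeholders: substitute your ALB hostname and the session cookie value
# copied from your browser's developer tools after logging in to Kubeflow.
host = "https://<your-alb-hostname>/pipeline"
cookies = "authservice_session=<cookie-value-from-browser>"

client = kfp.Client(host=host, cookies=cookies)
print(client.list_experiments())
```

This requires a running Kubeflow deployment, so it is a sketch of the call shape rather than a standalone script.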
@@ -81,7 +81,7 @@ data:
> Note: To get base64 string, run `echo -n $AWS_ACCESS_KEY_ID | base64`
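The same base64 value can be produced in Python if you prefer to avoid shell quoting pitfalls (the key ID below is AWS's well-known documentation dummy, not a real credential):

```python
import base64

# Equivalent of `echo -n $AWS_ACCESS_KEY_ID | base64`; note the -n, since a
# trailing newline would change the encoded value.
access_key_id = "AKIAIOSFODNN7EXAMPLE"  # dummy example key ID
encoded = base64.b64encode(access_key_id.encode()).decode()
print(encoded)  # QUtJQUlPU0ZPRE5ON0VYQU1QTEU=

# Round-trip to confirm no newline or padding mistakes crept in
assert base64.b64decode(encoded).decode() == access_key_id
```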
-## Configure containers to use AWS credentails
+## Configure containers to use AWS credentials
If you write any files to S3 in your application, use `use_aws_secret` to attach the AWS secret so your pods can access S3.
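As a sketch of attaching the secret in a pipeline (the image, command, and bucket below are placeholders; `use_aws_secret` comes from the `kfp.aws` module):

```python
import kfp.dsl as dsl
from kfp.aws import use_aws_secret

@dsl.pipeline(name="s3-example", description="Writes results to S3")
def s3_pipeline():
    # Placeholder component: substitute your own image and command
    train = dsl.ContainerOp(
        name="train",
        image="<your-image>",
        command=["python", "train.py", "--output", "s3://<your-bucket>/model"],
    )
    # Injects AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from secret `aws-secret`
    train.apply(use_aws_secret("aws-secret",
                               "AWS_ACCESS_KEY_ID",
                               "AWS_SECRET_ACCESS_KEY"))
```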
@@ -108,7 +108,7 @@ def iris_pipeline():
## Support S3 Artifact Store
-Kubeflow Pipelines supports different artifact viewers. You can create files in S3 and reference them in output artifacts in your application like beflow.
+Kubeflow Pipelines supports different artifact viewers. You can create files in S3 and reference them in output artifacts in your application as follows:
```python
metadata = {
@@ -145,7 +145,7 @@ In order for `ml-pipeline-ui` to read these artifacts:
1. Create a Kubernetes secret `aws-secret` in `kubeflow` namespace. Follow instructions [here](#s3-access-from-kubeflow-pipelines).
-1. Update deployment `ml-pipeline-ui` to use AWS credential environment viariables by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`.
+1. Update deployment `ml-pipeline-ui` to use AWS credential environment variables by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`.
```
apiVersion: extensions/v1beta1
......
@@ -45,7 +45,7 @@ We highly recommend deploying Multi-AZ database for Production. Please review RDS
[{{<figure src="/docs/images/aws/cloudformation-launch-stack.png">}}](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=kubeflow-db&templateURL=https://cloudformation-kubeflow.s3-us-west-2.amazonaws.com/rds.yaml)
-Remember to select correct **Region** in CloudFormation management console before clicking Next. We recommend you to change the **DBPassword**, if not it will dafault to `Kubefl0w`. Select VpcId, Subnets and SecurityGroupId before clicking Next. Take rest all defaults and click **Create Stack**.
+Remember to select the correct **Region** in the CloudFormation management console before clicking Next. We recommend that you change the **DBPassword**; otherwise it will default to `Kubefl0w`. Select the VpcId, Subnets, and SecurityGroupId before clicking Next. Keep all the other defaults and click **Create Stack**.
Once the CloudFormation stack is complete, click on the Outputs tab to get the RDS endpoint. If you didn't use CloudFormation, you can retrieve the RDS endpoint through the AWS management console for RDS, on the Connectivity & security tab under the Endpoint & port section. We will use it in the next step while installing Kubeflow.
@@ -71,7 +71,7 @@ Modify `${CONFIG_FILE}` file to add `external-mysql` in both pipeline and metadata
mysqlUser=<$DBUsername>
mysqlPassword=<$DBPassword>
```
-Edit `params.env` file for the external-mysql metedata service (`kustomize/metadata/overlays/external-mysql/params.env`) and update values based on your configuration:
+Edit `params.env` file for the external-mysql metadata service (`kustomize/metadata/overlays/external-mysql/params.env`) and update values based on your configuration:
```
MYSQL_HOST=external_host
@@ -79,7 +79,7 @@ Modify `${CONFIG_FILE}` file to add `external-mysql` in both pipeline and metadata
MYSQL_PORT=3306
MYSQL_ALLOW_EMPTY_PASSWORD=true
```
-Edit `secrets.env` file for the external-mysql metedata service (`kustomize/metadata/overlays/external-mysql/secrets.env`) and update values based on your configuration:
+Edit `secrets.env` file for the external-mysql metadata service (`kustomize/metadata/overlays/external-mysql/secrets.env`) and update values based on your configuration:
```
MYSQL_USERNAME=<$DBUsername>
......