diff --git a/content/en/docs/aws/authentication.md b/content/en/docs/aws/authentication.md index 2a787b1de6902f1c68e2aae63054fb9f2079d423..80af0c14773ff0db95a1be5ee7ba441eeb656879 100644 --- a/content/en/docs/aws/authentication.md +++ b/content/en/docs/aws/authentication.md @@ -15,9 +15,9 @@ In order to simply your setups, we highly recommend you to use this manifest. ## Traffic Flow External Traffic → [ Ingress → Istio ingress gateway → Istio virtual services ] -When you generate and apply kubernetes resources, an ingress is created to manage external traffic to Kubernetes services. The AWS Appliction Load Balancer(ALB) Ingress Controller will provision an Application Load balancer for that ingress. By default, TLS and authentication are not enabled at creation time. +When you generate and apply Kubernetes resources, an ingress is created to manage external traffic to Kubernetes services. The AWS Application Load Balancer (ALB) Ingress Controller will provision an Application Load Balancer for that ingress. By default, TLS and authentication are not enabled at creation time. -Kubeflow uses [Istio](https://istio.io/) to manage internal traffic. In AWS solution, TLS, authentication can be done at the ALB and and authorization can be done at Istio layer. +Kubeflow uses [Istio](https://istio.io/) to manage internal traffic. In the AWS solution, TLS and authentication are handled at the ALB, and authorization is handled at the Istio layer. ## Enable TLS and Authentication @@ -65,9 +65,9 @@ plugins: .... ``` -> Note: You can use your own domain for `cognitoUserPoolDomain`. In this case, we just use Amazon Coginito domain `kubeflow-testing`. If you use your own domain, please check [aws-e2e](/docs/aws/aws-e2e) for more details. +> Note: You can use your own domain for `cognitoUserPoolDomain`. In this example, we use the Amazon Cognito domain `kubeflow-testing`. If you use your own domain, please check [aws-e2e](/docs/aws/aws-e2e) for more details. 
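For reference, the Cognito settings discussed above typically sit under the AWS plugin spec in the kfctl config file. A hedged sketch — the field names follow the KfDef layout used by the v1.0/v1.1 AWS docs, and every value below is a placeholder to substitute:

```yaml
plugins:
- kind: KfAwsPlugin
  metadata:
    name: aws
  spec:
    auth:
      cognito:
        # All four values are placeholders -- copy the real ones from your
        # Cognito user pool and ACM certificate.
        cognitoAppClientId: <your-app-client-id>
        cognitoUserPoolArn: arn:aws:cognito-idp:<region>:<account>:userpool/<pool-id>
        cognitoUserPoolDomain: kubeflow-testing
        certArn: arn:aws:acm:<region>:<account>:certificate/<cert-id>
```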
-After you finish the TLS and Authentication configuration, then you can run `kfctl apply -V -f ${CONFIG_FILE}`. +After you finish the TLS and Authentication configuration, run this command: `kfctl apply -V -f ${CONFIG_FILE}`. After a while, your ALB will be ready, you can get ALB hostname by running follow command. @@ -83,7 +83,7 @@ Update your callback URLs. class="mt-3 mb-3 border border-info rounded"> -Then you can visit kubeflow dahsboard using your ALB hostname. +Then you can visit the Kubeflow dashboard using your ALB hostname. Cognito Authentication pop-up -In order to make Coginito to use custom domain name, A record is required to resolve `platform.domain.com` as root domain, which can be a Route53 Alias to the ALB as well. We can use abitrary ip here now, once we have ALB created, we will update the value later. +In order to make Cognito use a custom domain name, an A record is required to resolve `platform.domain.com` as the root domain; this can be a Route53 Alias to the ALB as well. We can use an arbitrary IP here for now; once the ALB is created, we will update the value later. If you're not using Route53, you can point that A record anywhere. diff --git a/content/en/docs/aws/deploy/install-kubeflow.md b/content/en/docs/aws/deploy/install-kubeflow.md index 885950ece5295f0c09531d95102833db10c44dff..9274f087a7127875d8a89f210bcfe0d56373ed44 100644 --- a/content/en/docs/aws/deploy/install-kubeflow.md +++ b/content/en/docs/aws/deploy/install-kubeflow.md @@ -83,7 +83,7 @@ Notes: * **${AWS_CLUSTER_NAME}** - The name of your eks cluster. This will be picked by `kfctl` and set value to `metadata.name`. - `alb-ingress-controller` requires correct value to provision application load balanders. + `alb-ingress-controller` requires the correct value to provision Application Load Balancers. Alb will be only created with correct cluster name. 
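The ALB hostname lookup mentioned earlier can be scripted. A hedged sketch — it assumes the ingress created for Kubeflow is the first (or only) one in the `istio-system` namespace, which may differ in your deployment:

```shell
# jsonpath expression that reads the ALB hostname off the ingress status.
JSONPATH='{.items[0].status.loadBalancer.ingress[0].hostname}'

# Uncomment to run against your own cluster:
# kubectl get ingress -n istio-system -o jsonpath="$JSONPATH"

# Print the expression so the sketch is runnable without a cluster.
echo "$JSONPATH"
```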
@@ -216,7 +216,7 @@ kubectl rollout restart deployment dex -n auth Kubeflow provides multi-tenancy support and user are not able to create notebooks in `kubeflow`, `default` namespace. -The first time you visit the cluster, you can ceate a namespace `anonymous` to use. If you want to create different users, you can create `Profile` and then `kubectl apply -f profile.yaml`. Profile controller will create new namespace and service account which is allowed to create notebook in that namespace. +The first time you visit the cluster, you can create a namespace `anonymous` to use. If you want to create different users, you can create a `Profile` and then run `kubectl apply -f profile.yaml`. The Profile controller will create a new namespace and a service account that is allowed to create notebooks in that namespace. ```yaml apiVersion: kubeflow.org/v1beta1 diff --git a/content/en/docs/aws/pipeline.md b/content/en/docs/aws/pipeline.md index e90ca9f92e2632f8c3a29ca2bad47e6c00a72b46..86a1e307e21491bf08411750d16839029e74dd5e 100644 --- a/content/en/docs/aws/pipeline.md +++ b/content/en/docs/aws/pipeline.md @@ -6,7 +6,7 @@ weight = 90 ## Authenticate Kubeflow Pipeline using SDK inside cluster -In v1.1.0, in-cluster communitation from notebook to Kubeflow Pipeline is not supported in this phase. In order to use `kfp` as previous, user needs to pass a cookie to KFP for communication as a walkaround. +In v1.1.0, in-cluster communication from notebooks to Kubeflow Pipelines is not supported. In order to use `kfp` as before, users need to pass a cookie to KFP for communication as a workaround. You can follow following steps to get cookie from your browser after you login Kubeflow. Following examples uses Chrome browser. > Note: You have to use images in [AWS Jupyter Notebook](/docs/aws/notebook-server) because it includes a critical SDK fix [here](https://github.com/kubeflow/pipelines/pull/4285). 
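The cookie workaround above can be sketched with the `kfp` SDK. The cookie name `authservice_session` and the `/pipeline` path are assumptions taken from a typical Dex/ALB Kubeflow deployment — substitute the values copied from your own browser session:

```python
# Hedged sketch of the v1.1.0 cookie workaround; not an official recipe.

def kfp_session_cookie(value: str) -> str:
    """Format the session cookie string passed to kfp.Client(cookies=...)."""
    return f"authservice_session={value}"

# Usage (requires the `kfp` package and a reachable Kubeflow endpoint):
#
#   import kfp
#   client = kfp.Client(
#       host="https://<your-alb-hostname>/pipeline",
#       cookies=kfp_session_cookie("<cookie-value-copied-from-chrome>"),
#   )
#   print(client.list_pipelines())
```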
@@ -81,7 +81,7 @@ data: > Note: To get base64 string, run `echo -n $AWS_ACCESS_KEY_ID | base64` -## Configure containers to use AWS credentails +## Configure containers to use AWS credentials If you write any files to S3 in your application, use `use_aws_secret` to attach aws secret to access S3. @@ -108,7 +108,7 @@ def iris_pipeline(): ## Support S3 Artifact Store -Kubeflow Pipelines supports different artifact viewers. You can create files in S3 and reference them in output artifacts in your application like beflow. +Kubeflow Pipelines supports different artifact viewers. You can create files in S3 and reference them in output artifacts in your application as follows: ```python metadata = { @@ -145,7 +145,7 @@ In order for `ml-pipeline-ui` to read these artifacts: 1. Create a Kubernetes secret `aws-secret` in `kubeflow` namespace. Follow instructions [here](#s3-access-from-kubeflow-pipelines). -1. Update deployment `ml-pipeline-ui` to use AWS credential environment viariables by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`. +1. Update deployment `ml-pipeline-ui` to use AWS credential environment variables by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`. ``` apiVersion: extensions/v1beta1 diff --git a/content/en/docs/aws/rds.md b/content/en/docs/aws/rds.md index b032d39ca225e57426fa8d0376920e48f215e04e..dfaf221c96d5fab80a9a13966340711da40d2da9 100644 --- a/content/en/docs/aws/rds.md +++ b/content/en/docs/aws/rds.md @@ -45,7 +45,7 @@ We highly recommend deploying Multi-AZ database for Production. Please review RD [{{
}}](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=kubeflow-db&templateURL=https://cloudformation-kubeflow.s3-us-west-2.amazonaws.com/rds.yaml) -Remember to select correct **Region** in CloudFormation management console before clicking Next. We recommend you to change the **DBPassword**, if not it will dafault to `Kubefl0w`. Select VpcId, Subnets and SecurityGroupId before clicking Next. Take rest all defaults and click **Create Stack**. +Remember to select the correct **Region** in the CloudFormation management console before clicking Next. We recommend that you change the **DBPassword**; if not, it will default to `Kubefl0w`. Select VpcId, Subnets and SecurityGroupId before clicking Next. Leave the rest as defaults and click **Create Stack**. Once the CloudFormation is completed, click on Outputs tab to get RDS endpoint. If you didn't use CloudFormation, you can retrieve RDS endpoint through AWS management console for RDS on the Connectivity & security tab under Endpoint & port section. We will use it in the next step while installing Kubeflow. 
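Instead of the console, the stack output can also be read with the AWS CLI. A hedged sketch — `kubeflow-db` is the stack name used in this guide, but the output key name is an assumption; check your stack's Outputs tab for the exact key:

```shell
# JMESPath query that extracts the endpoint output from the stack
# (the OutputKey name is a guess -- verify it in the console first).
RDS_QUERY="Stacks[0].Outputs[?OutputKey=='RDSEndpoint'].OutputValue"

# Uncomment to run (requires configured AWS CLI credentials):
# aws cloudformation describe-stacks --stack-name kubeflow-db \
#   --query "$RDS_QUERY" --output text

# Print the query so the sketch is runnable without AWS access.
echo "$RDS_QUERY"
```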
@@ -71,7 +71,7 @@ Modify `${CONFIG_FILE}` file to add `external-mysql` in both pipeline and metada mysqlUser=<$DBUsername> mysqlPassword=<$DBPassword> ``` - Edit `params.env` file for the external-mysql metedata service (`kustomize/metadata/overlays/external-mysql/params.env`) and update values based on your configuration: + Edit `params.env` file for the external-mysql metadata service (`kustomize/metadata/overlays/external-mysql/params.env`) and update values based on your configuration: ``` MYSQL_HOST=external_host @@ -79,7 +79,7 @@ Modify `${CONFIG_FILE}` file to add `external-mysql` in both pipeline and metada MYSQL_PORT=3306 MYSQL_ALLOW_EMPTY_PASSWORD=true ``` - Edit `secrets.env` file for the external-mysql metedata service (`kustomize/metadata/overlays/external-mysql/secrets.env`) and update values based on your configuration: + Edit `secrets.env` file for the external-mysql metadata service (`kustomize/metadata/overlays/external-mysql/secrets.env`) and update values based on your configuration: ``` MYSQL_USERNAME=<$DBUsername>