Hello!
It takes a few pieces to assemble a working lambda action for CodePipeline. I like to start from a simple example and build up to what I need. Here’s the code I use as a starting point.
First, a few notes:
- My pipeline lambda functions are usually small, often only a few dozen lines (more than that is usually a signal that I’m implementing an anti-pattern). Because my resources are small, I just drop the code into CloudFormation’s ZipFile. That saves me from building a package. In more complex cases you may want to expand your lambda function resource.
- One of the most common problems I see in lambda action development is unhandled exceptions. Read more in my article here: https://operatingops.com/2019/08/03/codepipeline-python-aws-lambda-functions-without-timeouts/
- This example focuses on the minimum resources, permissions, and code for a healthy lambda action. I skipped some of the usual good practices like template descriptions and parameterized config.
- I put the S3 Bucket and CloudWatch Logs Log Group in the same template as the function so they're easy to see in this example. Usually I put them in a separate template because they don't share the function's lifecycle: I don't want rollbacks or reprovisions to delete my artifacts or logs.
- My demo function doesn't do anything with the pipeline artifact; it just logs the user parameter string passed to it. When I use custom actions like these, it's often for non-artifact tasks like passing notifications to outside systems, and this is all I need.
- You'll have to upload a file as `my_artifact` to the bucket this creates so the pipeline's source action has something to pull (see the boto3 sketch after these notes). The bucket will be named for your account ID and region to prevent collisions with other people's buckets (S3's namespace is global to all AWS customers).
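If you'd rather script that upload than click through the console, here's a minimal sketch using boto3. It assumes the stack below is already deployed, that your profile has a default region configured, and that you have a local file named `my_artifact`:

```python
# A minimal sketch of seeding the source object. Assumes the stack below is
# already deployed and a local file named 'my_artifact' exists.
import boto3

# Reconstruct the bucket name the template builds from account ID and region.
account_id = boto3.client('sts').get_caller_identity()['Account']
region = boto3.session.Session().region_name  # assumes a default region is configured

bucket = f'{account_id}-{region}-pipeline'

boto3.client('s3').upload_file('my_artifact', bucket, 'my_artifact')
print(f'Uploaded s3://{bucket}/my_artifact')
```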
Now, the code:
```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  PipelineBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      BucketName: !Sub '${AWS::AccountId}-${AWS::Region}-pipeline'
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled

  LambdaLogs:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/lambda/log-user-parameters
      RetentionInDays: 30

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: '/'
      Policies:
        - PolicyName: execution-role
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - logs:DescribeLogGroup
                  - logs:PutLogEvents
                Resource: !GetAtt LambdaLogs.Arn
              - Effect: Allow
                Action:
                  - codepipeline:PutJobFailureResult
                  - codepipeline:PutJobSuccessResult
                # When this was written, CP's IAM policies required '*' for job results permissions.
                # https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awscodepipeline.html#awscodepipeline-actions-as-permissions
                Resource: '*'

  PipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: '/'
      Policies:
        - PolicyName: actions
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:Get*
                  - s3:Put*
                  - s3:ListBucket
                Resource:
                  - !Sub
                    - ${BucketArn}/*
                    - BucketArn: !GetAtt PipelineBucket.Arn
                  - !GetAtt PipelineBucket.Arn
              - Effect: Allow
                Action:
                  - lambda:InvokeFunction
                # ARN manually constructed to avoid circular dependencies in CloudFormation.
                Resource: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:log-user-parameters'

  Function:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          # https://operatingops.com/2019/08/03/codepipeline-python-aws-lambda-functions-without-timeouts/
          import json
          import logging

          import boto3


          def lambda_handler(event, context):
              logger = logging.getLogger()
              logger.setLevel(logging.INFO)
              logger.debug(json.dumps(event))

              codepipeline = boto3.client('codepipeline')
              s3 = boto3.client('s3')
              job_id = event['CodePipeline.job']['id']

              try:
                  user_parameters = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters']
                  logger.info(f'User parameters: {user_parameters}')
                  response = codepipeline.put_job_success_result(jobId=job_id)
                  logger.debug(response)
              except Exception as error:
                  logger.exception(error)
                  response = codepipeline.put_job_failure_result(
                      jobId=job_id,
                      failureDetails={
                          'type': 'JobFailed',
                          'message': f'{error.__class__.__name__}: {str(error)}'
                      }
                  )
                  logger.debug(response)
      FunctionName: log-user-parameters
      Handler: index.lambda_handler
      Role: !GetAtt LambdaRole.Arn
      Runtime: python3.7
      Timeout: 30

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref PipelineBucket
        Type: 'S3'
      Name: log-user-parameters
      RoleArn: !GetAtt PipelineRole.Arn
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: 'S3'
                Version: '1'
              # Docs say 'Configuration' has to be JSON but you can use YAML.
              # CloudFormation will convert it to JSON.
              Configuration:
                S3Bucket: !Ref PipelineBucket
                S3ObjectKey: my_artifact
                PollForSourceChanges: false
              InputArtifacts: []
              OutputArtifacts:
                - Name: Artifact
              Region: !Ref 'AWS::Region'
        - Name: LogUserData
          Actions:
            - Name: LogUserData
              ActionTypeId:
                Category: Invoke
                Owner: AWS
                Provider: Lambda
                Version: '1'
              # Docs say 'Configuration' has to be JSON but you can use YAML.
              # CloudFormation will convert it to JSON.
              Configuration:
                FunctionName: !Ref Function
                UserParameters: Hello!
              InputArtifacts:
                - Name: Artifact
              Region: !Ref 'AWS::Region'
              RunOrder: 1
```
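If you want to sanity-check the key path the handler walks before running a real execution, you can poke at a hand-built event. This is just a sketch of the invoke event's shape; the job ID is a placeholder and I've trimmed the fields the demo doesn't read:

```python
# A hand-built stand-in for the event CodePipeline sends to an Invoke action.
# The job ID is a placeholder; fields the demo doesn't read are trimmed.
sample_event = {
    'CodePipeline.job': {
        'id': '11111111-abcd-1111-abcd-111111abcdef',
        'data': {
            'actionConfiguration': {
                'configuration': {
                    'FunctionName': 'log-user-parameters',
                    'UserParameters': 'Hello!',
                }
            }
        }
    }
}

# The same key path the handler uses to pull the user parameters string.
user_parameters = (
    sample_event['CodePipeline.job']['data']
    ['actionConfiguration']['configuration']['UserParameters']
)
print(user_parameters)  # Hello!
```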
This creates a pipeline with an action that logs our user parameters string.
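If you'd rather verify that from the command line than the console, a quick boto3 check of the function's log group works too. This sketch assumes the log group name from the template and at least one pipeline execution:

```python
# A minimal check that the action ran, assuming the template's log group name
# and at least one pipeline execution.
import boto3

logs = boto3.client('logs')

response = logs.filter_log_events(
    logGroupName='/aws/lambda/log-user-parameters',
    filterPattern='"User parameters"',  # matches the handler's logger.info line
)

for log_event in response['events']:
    print(log_event['message'].strip())
```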
With a few CloudFormation parameters and a little extra code in my function, this pattern almost always solves my problem. Hope it helps!
Happy automating,
Adam
Need more than just this article? We’re available to consult.