Just run this single command in vim (the regex might require modifications):
g/\v^(#|$|;)/d
Recently I was investigating how big my InnoDB buffer pool should be. While looking for more detailed info I came across this really handy MySQL query, which returns the number of GB to which you should set your innodb_buffer_pool_size:
SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS
FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine='InnoDB') A;
To check your current data usage you can use the following:
SELECT (PagesData*PageSize)/POWER(1024,3) DataGB
FROM (SELECT variable_value PagesData
      FROM information_schema.global_status
      WHERE variable_name='Innodb_buffer_pool_pages_data') A,
     (SELECT variable_value PageSize
      FROM information_schema.global_status
      WHERE variable_name='Innodb_page_size') B;
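Once you have the recommended size from the first query, applying it is just a configuration change. As a rough sketch (the 8 GB below is only a placeholder for whatever value the query returned for you), set innodb_buffer_pool_size = 8G in the [mysqld] section of my.cnf and restart, or change it online on MySQL 5.7+ where the variable is dynamic:

-- set the pool to 8 GB online (MySQL 5.7+); adjust to the value the query gave you
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
-- verify the new value
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';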
With this knowledge you should not have any problems tuning your MySQL 😉
In today's technological world it has become very popular (and quite easy) to create serverless architectures with Lambdas and expose them via API Gateway.
The exposed part is something we could protect better. The solution provided here is a basic blueprint which leverages OpenID (in this case set up in Okta).
This software/code is provided to you “as-is” and without warranty of any kind, express, implied or otherwise, including without limitation, any warranty of fitness for a particular purpose
Although I would like to keep the solution requirements to a minimum, this does require some software/services to work nicely together. Below I highlighted what we will need:
Setup of the whole solution involves several steps. I have outlined them below.
As a nice feature you could use claims to determine who can execute READ/WRITE actions.
In order to verify that you can get tokens from the app you have just created, you need to call one of the Okta endpoints.
For that you need to base64 encode your client_id:client_secret pair (the -n flag matters, so you do not encode a trailing newline):
echo -n "client_id:client_secret" | base64
The result then goes into the Authorization header:
Authorization: Basic <result-of-command>
In the call below, replace your-okta-tenant-name, username and password with your own values.
curl -X POST \
  https://xxx.okta-emea.com/oauth2/default/v1/token \
  -H 'Authorization: Basic MG9hMmQzN.........Q==' \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'username=username&password=Password1&grant_type=password&scope=openid'
{
  "access_token": "eyJraW.....BvXdkU2Gg",
  "token_type": "Bearer",
  "expires_in": 3600,
  "scope": "openid",
  "id_token": "eyJr....yg"
}
At this point you have a fully working Okta OpenID app and can obtain tokens. Our next task is to obtain the public key which will later allow us to verify the token signature.
There are many ways to obtain the key – and each of them either takes more time or involves you providing more information. My idea was simple – automate it as much as possible… therefore I came up with go-jwk-pem (available on GitHub). It is a simple CLI tool which takes either a token or an Okta server URL and retrieves the public key which has been used to sign the JWT.
In this instance I will just use the token from the previous step:
go-jwk-pem from-token --token eyJraW.....BvXdkU2Gg | /usr/bin/env ruby -e 'p ARGF.read'
The result of this command is a single-line public key, which is the last piece of the puzzle we need to make our solution work:
"-----BEGIN RSA PUBLIC KEY-----\nMIIBIjA........A4\nzTsuZ+eQLfhNbuA.....wWtcDsd+vMUlS7iJow\n2QIDAQAB\n-----END RSA PUBLIC KEY-----\n\n"
The time has come for the real fun 🙂 Let's begin by cloning the solution from GitHub.
Once that's done we need to modify the value in serverless.env.yml
dev:
  OKTA_PUBLIC_KEY: <PASTE-YOUR-PEM-HERE>
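The repository already takes care of wiring this value into the auth function; purely for illustration, a typical serverless.yml would consume such a file and expose the key as an environment variable roughly like this (property names here are assumptions, not necessarily the repo's exact layout):

provider:
  name: aws
  runtime: go1.x
  environment:
    # hypothetical wiring: read the stage-specific key from serverless.env.yml
    OKTA_PUBLIC_KEY: ${file(./serverless.env.yml):dev.OKTA_PUBLIC_KEY}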
Since these functions are written in Go we need to build them before deploying:
> [SHELL] RafPe $ make build
env GOOS=linux go build -ldflags="-s -w" -o bin/func1 func1/main.go
env GOOS=linux go build -ldflags="-s -w" -o bin/auth auth/main.go
And now let's deploy using one of the AWS profiles (from the credentials file) by running a simple command:
> [SHELL] RafPe $ sls deploy -s dev --aws-profile myAwsProfile --verbose
The output shows us details about our deployed functions (this is a minimal blueprint, so your output can have more):
Serverless: Stack update finished...
Service Information
service: test-auth
stage: dev
region: eu-west-1
stack: test-auth-dev
api keys:
  None
endpoints:
  ANY - https://reiw2emcp3.execute-api.eu-west-1.amazonaws.com/dev/hello
functions:
  func1: test-auth-dev-func1
  auth: test-auth-dev-auth
layers:
  None
Stack Outputs
AuthLambdaFunctionQualifiedArn: arn:aws:lambda:eu-west-1:123:function:test-auth-dev-auth:3
Func1LambdaFunctionQualifiedArn: arn:aws:lambda:eu-west-1:123:function:test-auth-dev-func1:3
ServiceEndpoint: https://reiw2emcp3.execute-api.eu-west-1.amazonaws.com/dev
ServerlessDeploymentBucketName: test-auth-dev-serverlessdeploymentbucket-2g5ap50n5lwn
So right now we can immediately check if our setup works. Let's start by trying to make a simple HTTP call:
> [SHELL] RafPe $ http https://reiw2emcp3.execute-api.eu-west-1.amazonaws.com/dev/hello
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 26
Content-Type: application/json
Date: Sat, 15 Dec 2018 11:01:24 GMT
Via: 1.1 bce55e537f8dfcf0127f649d11fd1821.cloudfront.net (CloudFront)

{
    "message": "Unauthorized"
}
As expected we get an Unauthorized message 😉 Let's add the token generated by Okta and make the call again:
> [SHELL] RafPe $ http https://reiw2emcp3.execute-api.eu-west-1.amazonaws.com/dev/hello Authorization:'Bearer eyJraWQ....QHMi5ISw'
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 69
Content-Type: application/json
Date: Sat, 15 Dec 2018 11:10:04 GMT
Via: 1.1 94d63cbf92082237b86267ffd4cacc64.cloudfront.net (CloudFront)
X-Cache: Miss from cloudfront
X-MyCompany-Func-Reply: world-handler

{
    "message": "Okay so your other function also executed successfully!"
}
and voilà 😉 we have just created a custom authorizer validating our Okta JWT.
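For context, below is a minimal, hypothetical Go sketch of what such an authorizer typically does with the PEM key we placed in OKTA_PUBLIC_KEY: parse the key, verify the token signature and hand back an IAM policy. It is not the exact code from the repository, and the libraries used (aws-lambda-go, dgrijalva/jwt-go) are my assumptions about one common way to wire it up.

package main

import (
	"errors"
	"os"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	jwt "github.com/dgrijalva/jwt-go"
)

// handler validates the bearer token signature against the PEM key passed in
// via the OKTA_PUBLIC_KEY environment variable and, if the token is valid,
// returns an IAM policy allowing the request to reach the backing function.
func handler(req events.APIGatewayCustomAuthorizerRequest) (events.APIGatewayCustomAuthorizerResponse, error) {
	tokenString := strings.TrimPrefix(req.AuthorizationToken, "Bearer ")

	publicKey, err := jwt.ParseRSAPublicKeyFromPEM([]byte(os.Getenv("OKTA_PUBLIC_KEY")))
	if err != nil {
		return events.APIGatewayCustomAuthorizerResponse{}, errors.New("Unauthorized")
	}

	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// make sure the token is really RSA-signed before handing back the key
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, errors.New("unexpected signing method")
		}
		return publicKey, nil
	})
	if err != nil || !token.Valid {
		return events.APIGatewayCustomAuthorizerResponse{}, errors.New("Unauthorized")
	}

	// this is also the place where you could inspect token.Claims to decide
	// on READ/WRITE permissions, as mentioned earlier
	return events.APIGatewayCustomAuthorizerResponse{
		PrincipalID: "user",
		PolicyDocument: events.APIGatewayCustomAuthorizerPolicy{
			Version: "2012-10-17",
			Statement: []events.IAMPolicyStatement{
				{
					Action:   []string{"execute-api:Invoke"},
					Effect:   "Allow",
					Resource: []string{req.MethodArn},
				},
			},
		},
	}, nil
}

func main() {
	lambda.Start(handler)
}

The details in the repository will differ, but the flow – read the PEM, verify the signature, return an Allow policy or an Unauthorized error – stays the same.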
Although this is just a blueprint, it can be nicely extended. I would like to point out several items you might be interested in:
If you have any feedback – please leave a comment or add your code to the GitHub repo 😉
Since I started working with AWS I sometimes hit the same problems more than once 😉 One of those happened when working with AWS Cognito – I just needed to authenticate and get a token – or just verify the user 😉 using the command line. I honestly did not want to be bothered with any complexity just to get simple tokens which I planned to use for accessing other systems, etc.
For this purpose I have created a simple CLI (right now with just 2 methods) to help me out in those situations. Usage is extremely simple: you just need to have your AWS profile configured and have the details of your AppClient from your user pool.
> [SHELL] RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 auth --username rafpe --password 'Password.0ne!' --clientID 2jxxxiuui123
{
  AuthenticationResult: {
    AccessToken: "eyJraWQiOiJ0QXVBNmxtNngrYkxoSmZ",
    ExpiresIn: 3600,
    IdToken: "eyJraWQiOiJ0bHF2UElTV0pn",
    RefreshToken: "eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R-TpkR_uompG7fyajYeFvn-rJVC_tDO4pB3",
    TokenType: "Bearer"
  },
  ChallengeParameters: {}
}
which in return should give you a response with the tokens needed further in your adventures with AWS…. but what if your user is in a state where the password needs to be changed 😉 ….
> [SHELL] RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 auth --username rafpe --password 'Password.0ne!' --clientID 2jxxxiuui123
{
  ChallengeName: "NEW_PASSWORD_REQUIRED",
  ChallengeParameters: {
    requiredAttributes: "[]",
    userAttributes: "{\"email_verified\":\"true\",\"email\":\"[email protected]\"}",
    USER_ID_FOR_SRP: "rafpe"
  },
  Session: "bCqSkLeoJR_ys...."
}
With the session above and the known challenge for a new password, you can use it to set the desired password:
> [SHELL] RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 admin reset-pass --username rafpe --pass-new 'Password.0ne2!' --clientID 2jxxxiuui123 --userPoolID eu-central-1_CWNnTiR0j --session "bCqSkLeoJR_ys...."
and voilà 😉 we can now continue playing with tokens.
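For example, assuming you protect an API Gateway endpoint with a Cognito authorizer, the obtained IdToken can be passed straight into the request – the endpoint URL below is purely hypothetical:

curl -H "Authorization: eyJraWQiOiJ0bHF2UElTV0pn..." https://your-api-id.execute-api.eu-central-1.amazonaws.com/dev/hello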
The whole solution is available on Github https://github.com/RafPe/go-cognito-authy/tree/master and if you are missing something please create a PR 😉
Sometimes when you work with projects on GitHub it takes a bit more time than expected to prepare a solution you are happy to create a PR for. In those cases it is good to be able to pull changes from upstream.
git clone [email protected]:YOUR-USERNAME/YOUR-FORKED-REPO.git
cd into/cloned/fork-repo
git remote add upstream git://github.com/ORIGINAL-DEV-USERNAME/REPO-YOU-FORKED-FROM.git
git fetch upstream
git pull upstream master
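If you also want your fork on GitHub to reflect the freshly merged upstream changes, push them back to your own remote (assuming it is still called origin):

git push origin master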
From this moment you can continue your happy coding with the upstream changes merged in.
Hi,
This will most likely be the first of several posts on the tools and approach taken to automate tasks in Akamai. Before we look into the specific toolset, let's peek at what Akamai's vision on automation is.
From what I have seen, some of the features work nicely and some of them are still in beta or alpha. We will be focusing on Akamai CLI and extending it with a plugin to manage network lists.
Akamai CLI is a tool which allows us to write a plugin in most of the common languages (for me it will be Golang) and then use it from the console. Since the tool is well documented I will skip introducing it and send you off to the documentation.
Before you go ahead and write your own plugin you should decide which client to choose (or write your own) to take care of communication with Akamai's API.
For Golang, Akamai has a client which you can get here – however, inspired by a colleague of mine who wrote go-gitlab (and not only that), I decided to make the client a bit more robust and organised and came up (as we engineers usually do 🙂 ) with an alternative version.
This client can be found under https://github.com/RafPe/go-edgegrid
We start off by installing the plugin into Akamai’s CLI toolkit by running
akamai install https://github.com/RafPe/akamai-cli-netlist
which in return shows us output confirming the plugin installation.
From this point onwards we can use all the benefits of our new plugin. Just to give it a spin I will explore just getting the lists…
The rest of the commands are well documented on the repository page at https://github.com/RafPe/akamai-cli-netlist – from there I encourage you to explore the options you have for automation, and let me know in the comments if it worked for you 🙂
My extension is not the only one recently created – below is a list of others you can already make use of:
Akamai CLI for Netstorage https://github.com/partamonov/akamai-cli-netstorage
Akamai CLI for Siteshield https://github.com/partamonov/akamai-cli-siteshield
Akamai CLI for Firewall Rules Notifications https://github.com/partamonov/akamai-cli-frn
If you have come across the same error as I did (error below), then solving it might be easier than you think.
TASK [Test credstash lookup plugin -- get the password with a context defined here] ***********************************************************************************************************************************************************************************************
objc[6763]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called.
objc[6763]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
You just need to run the following (see https://github.com/ansible/ansible/issues/31869):
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
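If you want the setting to survive new shell sessions, you can simply append it to your shell profile (assuming bash; adjust for your shell of choice):

echo 'export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES' >> ~/.bash_profile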
With this catchy post title I would like to start a series of short technical blog posts on how Cloudflare solutions can help solve challenges in our IT world (at least the ones I came across).
If you have not heard the name before, then go ahead and check cloudflare.com, and to find out what exactly Cloudflare is, have a look at their blog post 🙂 Before I start I would also like to let you know that this website does run on Cloudflare 🙂 but not all scenarios covered will be touching this property 😛 (for obvious reasons).
My plan for the coming week (or two) is to show you test case scenarios of the following:
With this cloud Swiss army knife of a tool we should be able to build several scenarios where we will see how using it can address our challenges. Now what is really cool about all of those – it is completely API driven… which means we also get a chance to play around with "no GUI"… the essence we engineers all like so much 🙂
Subscribe to not miss out on posts coming your way with all the goodies!
If there are some use case scenarios you would like to see, please leave your ideas in the comments (BTW – comments are moderated 🙂 ).
A simple and quick one 🙂 To rewrite non-www traffic to www in Apache you can use the following snippet (in your vhost config or .htaccess, with mod_rewrite enabled):
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ %{REQUEST_SCHEME}://www.%{HTTP_HOST}$1 [R=301,L]
Hey,
I have not had time to do as many posts as I wanted since I started my recent project – my own app on AWS using the Serverless framework. Like every good engineer, while building my solution I looked at many open source options, and Serverless seemed to be really good… until I wanted to do something which no one seems to have done before (every one of us knows that, right?)…
So what was that special thing? Well, nothing fancy – I just wanted to "attach AWS API Gateway basic request validation"… So I thought… how hard can it be 🙂
Like everyone, I used Google to tell me who had done something like that before…. and that's how I found an issue related directly to my problem => https://github.com/serverless/serverless/issues/3464
I had a conversation there with everyone interested and we all seemed to agree that there was nothing that would work (at that specific moment 🙂 ).
By default (at the moment of writing this article) Serverless does not support this out of the box, which kicked me off to create my own plugin. Since being a developer is not my primary focus (I do it only as a hobbyist 🙂 ) it was a bit of a challenge, but it was completed with success. So what needed to happen to create it (if you want to skip the story just scroll down 🙂 )?
I started by creating a form of "hello world" plugin that would help me understand how to approach this in the best way. Resources for this can be found on the official Serverless site.
But I had a feeling this was too incomplete to create a fully functional plugin. So I spent quite a while browsing the internet and reading how people created their own plugins, from really simple ones to more advanced ones which rock & roll 😉 I think the resource that will show you more detailed steps can be found here.
With those bits of knowledge I went to the official plugins repo page and browsed through the repositories there. This gave me a better idea of what I needed to use.
Having all that knowledge I compiled my action plan which basically was:
This is how I managed to create my plugin, available on GitHub at https://github.com/RafPe/serverless-reqvalidator-plugin and via npm:
npm install serverless-reqvalidator-plugin
Using it is extremely simple. We start off by creating a custom resource in serverless.yml:
xMyRequestValidator:
  Type: "AWS::ApiGateway::RequestValidator"
  Properties:
    Name: 'my-req-validator'
    RestApiId:
      Ref: ApiGatewayRestApi
    ValidateRequestBody: true
    ValidateRequestParameters: false
With that one done, we add the plugin to the enabled plugins:
plugins:
  - serverless-reqvalidator-plugin
And then in our functions we specify the validator:
debug:
  handler: apis/admin/debug/debug.debug
  timeout: 10
  events:
    - http:
        path: admin/debug
        method: get
        cors: true
        private: true
        reqValidatorName: 'xMyRequestValidator'
Voilà 😉 And the code in the plugin that does the magic? You would not believe how little code it takes….
resources[methodName].Properties.RequestValidatorId = {"Ref": `${event.http.reqValidatorName}`};
And that's it folks 😉 Enjoy and happy coding 🙂