Using Jenkins for Continuous Delivery on a CCE Cluster
  • Overview
  • Procedure
  • 1. Set up the Jenkins storage directory
  • 2. Deploy the Jenkins server to Kubernetes
  • 3. Initialize Jenkins
  • 4. Install the Kubernetes plugin in Jenkins
  • 5. Connect an out-of-cluster Jenkins to the Kubernetes cluster
  • 6. Test and verify



Overview

Continuous build and release is an indispensable part of enterprise development, and most companies today build CI/CD pipelines on Jenkins. A Jenkins release pipeline integrates well with a Kubernetes cluster and makes the most of its deployment strengths. This document guides you through integrating a Jenkins release pipeline with a CCE cluster.

Procedure

1. Set up the Jenkins storage directory

Applications in a Kubernetes environment run as Docker images, so to keep data safe across restarts, the Jenkins data directory must be persisted to storage. This document uses one of the persistent storage options provided by CCE, which keeps the data consistent when the application is rescheduled to another node. You can also store the data locally, but to keep it consistent you would then have to pin Jenkins to a specific Kubernetes node.

See the "CCE Container Engine > User Guide > Storage Management" section.

Deploy a PVC with any of those methods and note down the PVC's name.
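For reference, a PVC manifest might look like the sketch below. The storage class and size are placeholders (replace them with the values produced by the CCE storage guide); the claim name matches the claimName (myjenkinspvc) used by jenkins.yaml later in this document:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myjenkinspvc                       # referenced as claimName in jenkins.yaml
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                        # placeholder size
  storageClassName: <cce-storage-class>    # placeholder: your CCE storage class
```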

2. Deploy the Jenkins server to Kubernetes

service-account.yaml

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
```

jenkins.yaml

```yaml
# jenkins

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: hub.baidubce.com/jpaas-public/jenkins-github:v0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: myjenkinspvc

---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  selector:
    name: jenkins
  # ensure the client IP is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: agent
      port: 50000
      protocol: TCP
```
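The JAVA_OPTS value above derives the JVM heap cap from the container's memory limit: the Downward API field resourceFieldRef with divisor: 1Mi exposes limits.memory as a whole number of MiB in LIMITS_MEMORY, which is then substituted into -Xmx$(LIMITS_MEMORY)m. A minimal sketch of that arithmetic (variable names mirror the manifest):

```shell
# mimic resourceFieldRef with divisor 1Mi: convert the limit in bytes to whole MiB
limit_bytes=$((1 * 1024 * 1024 * 1024))       # the manifest's 1Gi memory limit
LIMITS_MEMORY=$((limit_bytes / 1024 / 1024))  # what the Downward API injects
echo "-Xmx${LIMITS_MEMORY}m"                  # prints -Xmx1024m
```

With the 1Gi limit above, the JVM is started with a 1024 MiB heap ceiling, so the heap can never exceed the container limit.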

Note:

  • The claimName field in jenkins.yaml must be changed to the name of the PVC created in step 1, "Set up the Jenkins storage directory".

Run the following commands in the CCE Kubernetes cluster:

```shell
kubectl create -f service-account.yaml
kubectl create -f jenkins.yaml
```

If each resource is reported as created in turn, the deployment succeeded.

3. Initialize Jenkins

The Jenkins master service is now deployed and started, with its ports exposed as 80:30427 and 50000:31598 (your NodePorts may differ). You can open http://<node-ip>:30427 in a browser to reach the Jenkins UI.

Complete the initial plugin installation in the browser and configure the administrator account; the details are omitted here.

Note:

  • When initialization asks for the initial password in /var/jenkins_home/secrets/initialAdminPassword, you can read it directly from the mounted PVC directory, or from inside the container:

```shell
kubectl exec -it jenkins-0 cat /var/jenkins_home/secrets/initialAdminPassword
```

4. Install the Kubernetes plugin in Jenkins

Log in to the Jenkins master as the administrator, go to "Manage Jenkins" -> "Manage Plugins" -> "Available", check "Kubernetes", and install it.

After installation, go to "Manage Jenkins" -> "Configure System" -> "Add a new cloud" -> select "Kubernetes", then fill in the Kubernetes cloud configuration.

Notes:

  • 1. The Name field defaults to kubernetes. You may change it, but then the cloud parameter of podTemplate() in your jobs must be set to the same name, otherwise the cloud will not be found; cloud defaults to kubernetes.
  • 2. The Kubernetes URL field is set to https://kubernetes.default, the DNS record of the Kubernetes Service, which resolves to that Service's cluster IP.

Note:
You can also fill in the full DNS record https://kubernetes.default.svc.cluster.local, which follows the <service>.<namespace>.svc.cluster.local naming scheme, or the external address of the cluster, https://<master-ip>:<port>.

  • 3. The Jenkins URL field is set to http://jenkins.default, which, as above, uses the DNS record of the Jenkins Service. An http://<node-ip>:<node-port> address also works; for example http://x.x.x.x:30427 is fine here, where 30427 is the exposed NodePort.
  • 4. When the configuration is done, click the "Test Connection" button to check connectivity to Kubernetes; "Connection test successful" means the connection works and the configuration is correct.

5. Connect an out-of-cluster Jenkins to the Kubernetes cluster

Fill in the Kubernetes configuration.

Take a kubeconfig file as an example (the certificate data is abridged):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...   # base64-encoded CA certificate
    server: https://<master-ip>:<port>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTi...   # base64-encoded client certificate
    client-key-data: LS0tLS1CRUdJTi...           # base64-encoded client key
```

  • No. 1: keep the default, kubernetes.

  • No. 2: fill in the address from clusters.cluster.server in the kubeconfig file.

  • No. 3: take the certificate-authority-data content from the kubeconfig and base64-decode it into a file:

```shell
echo xxx | base64 -d > /opt/crt/ca.crt
```

Paste the contents of ca.crt into the "Kubernetes server certificate key" field of the Jenkins Kubernetes cloud. Then take the client-certificate-data and client-key-data contents from the kubeconfig and base64-decode them into files:

```shell
echo xxxxx== | base64 -d > /opt/crt/client.crt
echo xxxxx== | base64 -d > /opt/crt/client.key

# generate the client PKCS#12 file cert.pfx and download it to your machine
openssl pkcs12 -export -out /opt/crt/cert.pfx -inkey /opt/crt/client.key -in /opt/crt/client.crt -certfile /opt/crt/ca.crt
Enter Export Password:
Verifying - Enter Export Password:

# note: choose an export password and remember it
```
  • No. 4: add a credential in the Kubernetes cloud configuration.

Note: for "Upload certificate", upload the cert.pfx file just generated and downloaded; for "Password", enter the password chosen when generating cert.pfx. Then select this credential in No. 4.

Finally, click the connection test; a "Connection test successful" message indicates the test passed.
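The decoding in step No. 3 is plain base64: the kubeconfig stores PEM certificates base64-encoded, and base64 -d recovers them. A minimal round-trip sketch with a stand-in value (not a real certificate):

```shell
# stand-in for a kubeconfig field such as client-certificate-data
data=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
# equivalent of the document's `echo <data> | base64 -d > /opt/crt/client.crt`
printf '%s' "$data" | base64 -d    # prints -----BEGIN CERTIFICATE-----
```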

6. Test and verify

Jenkins master is now installed on Kubernetes and the connection is configured. Next, configure a job to test whether a build is dispatched and published successfully.

Pipeline-type jobs are supported.

Create a pipeline-type job named my-k8s-jenkins-pipeline, then enter a simple test script in the Pipeline field:

```groovy
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, cloud: 'kubernetes') {
    node(label) {
        stage('run shell') {
            sh 'sleep 130s'
            sh 'echo hello world.'
        }
    }
}
```
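If the cloud was renamed in step 4, or you want to control the agent image explicitly, the pod template can be spelled out. This variant is a sketch using the Kubernetes plugin's containerTemplate step; the image tag mirrors the pre-pull commands in the note below and is an assumption, not a requirement:

```groovy
def label = "mypod-${UUID.randomUUID().toString()}"
// 'cloud' must match the Name configured for the Kubernetes cloud in step 4
podTemplate(label: label, cloud: 'kubernetes', containers: [
    // explicit JNLP agent container; any JNLP-capable agent image works
    containerTemplate(name: 'jnlp', image: 'jenkins/jnlp-slave:4.0.1-1')
]) {
    node(label) {
        stage('run shell') {
            sh 'echo hello from an explicit pod template'
        }
    }
}
```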

Trigger a build. A build task appears in the build queue; once the agent pod initializes, the build runs and completes successfully. With the kubectl command line you can watch the whole process of the agent pod being created and deleted automatically.

Note: pulling the image used in the example directly may time out; you can pre-pull it on the nodes with the following commands.

```shell
docker pull hub.baidubce.com/jpaas-public/jenkins/jnlp-slave:v0
docker tag hub.baidubce.com/jpaas-public/jenkins/jnlp-slave:v0 jenkins/jnlp-slave:4.0.1-1
```